Stream Buffer crashes in qdv.dll on Win7 - C++

I have this code for streaming to/from a stream buffer. I've left out all error checking to make the code easier to read and to show what I'm doing.
ICaptureGraphBuilder2* pCaptureGraphBuilder = nullptr;
IGraphBuilder* pGraphCapture = nullptr;
IMediaControl* pControlCapture = nullptr;
IGraphBuilder* pGraphStream = nullptr;
IMediaControl* pControlStream = nullptr;
IBaseFilter* pFilterVideoIn = nullptr;
IBaseFilter* pFilterDVEncode = nullptr;
IBaseFilter* pFilterBufferCapture = nullptr;
IBaseFilter* pFilterBufferStream = nullptr;
// streams
IStreamBufferSink2* pBufferCapture = nullptr;
IStreamBufferSource* pBufferStream = nullptr;
// This sets up the streaming capture
HRESULT hr = CoCreateInstance(CLSID_CaptureGraphBuilder2,nullptr,CLSCTX_INPROC_SERVER,IID_PPV_ARGS(&pCaptureGraphBuilder));
hr = CoCreateInstance(CLSID_FilterGraph,nullptr,CLSCTX_INPROC_SERVER,IID_PPV_ARGS(&pGraphCapture));
hr = pCaptureGraphBuilder->SetFiltergraph(pGraphCapture);
hr = pGraphCapture->QueryInterface(IID_IMediaControl,(VOID**)&pControlCapture);
hr = EnumClassForFilterName(CLSID_VideoInputDeviceCategory,TEXT_CAPTUREDEVICE,&pFilterVideoIn); // This functions creates an instance of the capture device
hr = pGraphCapture->AddFilter(pFilterVideoIn,L"VideoIn");
hr = CoCreateInstance(CLSID_DVVideoEnc,nullptr,CLSCTX_INPROC_SERVER,IID_PPV_ARGS(&pFilterDVEncode));
hr = pGraphCapture->AddFilter(pFilterDVEncode,L"DVEncoder");
hr = CoCreateInstance(CLSID_StreamBufferSink,nullptr,CLSCTX_INPROC_SERVER,IID_PPV_ARGS(&pBufferCapture));
hr = pBufferCapture->QueryInterface(IID_PPV_ARGS(&pFilterBufferCapture));
hr = pGraphCapture->AddFilter(pFilterBufferCapture,L"BufferSink");
hr = pCaptureGraphBuilder->RenderStream(&PIN_CATEGORY_CAPTURE,&MEDIATYPE_Video,pFilterVideoIn,pFilterDVEncode,pFilterBufferCapture);
hr = pBufferCapture->LockProfile(L"C:\\save.tmp");
// now work on the stream in
hr = CoCreateInstance(CLSID_FilterGraph,nullptr,CLSCTX_INPROC_SERVER,IID_PPV_ARGS(&pGraphStream));
hr = pGraphStream->QueryInterface(IID_IMediaControl,(VOID**)&pControlStream);
hr = CoCreateInstance(CLSID_StreamBufferSource,nullptr,CLSCTX_INPROC_SERVER,IID_PPV_ARGS(&pBufferStream));
hr = pBufferStream->QueryInterface(IID_PPV_ARGS(&pFilterBufferStream));
hr = pGraphStream->AddFilter(pFilterBufferStream,L"StreamBufferSource");
hr = pBufferStream->SetStreamSink(pBufferCapture);
IEnumPins* pEnumPins = nullptr;
IPin* pPin = nullptr;
hr = pFilterBufferStream->EnumPins(&pEnumPins);
while(pEnumPins->Next(1,&pPin,nullptr) == S_OK){
hr = pGraphStream->Render(pPin);
pPin->Release();
}
// Which filters are really in the graph
// capture graph
OutputDebugString(L"-----------------------\n");
OutputDebugString(L"StreamIn Graph Filters\n");
OutputDebugString(L"\n");
IEnumFilters* pFilterEnum = nullptr;
IBaseFilter* pFilter = nullptr;
WCHAR name[256] = {0};
hr = pGraphCapture->EnumFilters(&pFilterEnum);
while(pFilterEnum->Next(1,&pFilter,nullptr) == S_OK){
FILTER_INFO fi={0};
pFilter->QueryFilterInfo(&fi);
StringCchPrintf(name,256,L"%s\n",fi.achName);
OutputDebugString(name);
fi.pGraph->Release();
pFilter->Release();
}
OutputDebugString(L"-----------------------\n");
// stream graph
OutputDebugString(L"-----------------------\n");
OutputDebugString(L"StreamSource Graph Filters\n");
OutputDebugString(L"\n");
pFilterEnum = nullptr;
pFilter = nullptr;
hr = pGraphStream->EnumFilters(&pFilterEnum);
while(pFilterEnum->Next(1,&pFilter,nullptr) == S_OK){
FILTER_INFO fi={0};
pFilter->QueryFilterInfo(&fi);
WCHAR name[256]={0};
StringCchPrintf(name,256,L"%s\n",fi.achName);
OutputDebugString(name);
fi.pGraph->Release();
pFilter->Release();
}
OutputDebugString(L"-----------------------\n");
hr = pControlCapture->Run();
hr = pControlStream->Run();
[SNIP]
and it crashes as soon as the second graph starts to run, with:
First-chance exception at 0x000007fef3ce42e0 (qdv.dll) in StreamBuffer.exe: 0xC0000005: Access violation reading location 0x00000000000001c5.
Unhandled exception at 0x000007fef3ce42e0 (qdv.dll) in StreamBuffer.exe: 0xC0000005: Access violation reading location 0x00000000000001c5.
This is telling me that something is not happy inside qdv.dll, so I downloaded the debug symbols to help diagnose what's really happening.
In the call stack I get:
qdv.dll!CDVVideoCodec::Receive() + 0x220 bytes
qdv.dll!CTransformInputPin::Receive() + 0x4c bytes
qdv.dll!CBaseInputPin::ReceiveMultiple() + 0x49 bytes
sbe.dll!COutputQueue::ThreadProc() + 0xca bytes
sbe.dll!COutputQueue::InitialThreadProc() + 0x1c bytes
kernel32.dll!BaseThreadInitThunk() + 0xd bytes
ntdll.dll!RtlUserThreadStart() + 0x21 bytes
This tells me that perhaps I haven't supplied enough information to the DV encoder in one of the graphs. I've tried various settings for the codec, but nothing works; I get the same crash in the same place with the same error.
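For reference, here is the kind of codec configuration I have been trying, sketched against the DV encoder's IDVEnc interface from strmif.h (the NTSC/DVSD/720x480 combination below is just one of the values I experimented with):
IDVEnc* pDVEnc = nullptr;
hr = pFilterDVEncode->QueryInterface(IID_PPV_ARGS(&pDVEnc));
if (SUCCEEDED(hr))
{
    // One combination I tried; other DVENCODER* values give the same crash.
    hr = pDVEnc->put_IFormatResolution(
        DVENCODERVIDEOFORMAT_NTSC,   // video standard
        DVENCODERFORMAT_DVSD,        // standard-definition DV
        DVENCODERRESOLUTION_720x480, // full NTSC frame
        FALSE,                       // no custom DVINFO
        nullptr);
    pDVEnc->Release();
}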
All help is appreciated.

Related

How to use IAudioClient3 (WASAPI) with Real-Time Work Queue API

I'm working on lowest-possible-latency MIDI synthesizer software. I'm aware of ASIO and other alternatives, but as they have apparently made significant improvements to the WASAPI stack (in shared mode, at least), I'm curious to try it out. I first wrote a simple event-driven version of the program, but as that's not the recommended way to do low-latency audio on Windows 10 (according to the docs), I'm trying to migrate to the Real-Time Work Queue API.
The documentation on Low Latency Audio states that it is recommended to use the Real-Time Work Queue API or MFCreateMFByteStreamOnStreamEx with WASAPI so that the OS can manage work items in a way that avoids interference from non-audio subsystems. This seems like a good idea, but the latter option seems to require some managed code (demonstrated in this WindowsAudioSession example), which I know nothing about and would prefer to avoid (also, the header Robytestream.h, which has the defs for IRandomAccessStream, isn't found on my system either).
The RTWQ example included in the docs is incomplete (it doesn't compile as such), and I have made the necessary additions to make it compilable:
class my_rtqueue : public IRtwqAsyncCallback {
public:
IRtwqAsyncResult* pAsyncResult;
RTWQWORKITEM_KEY workItemKey;
DWORD WorkQueueId;
STDMETHODIMP GetParameters(DWORD* pdwFlags, DWORD* pdwQueue)
{
HRESULT hr = S_OK;
*pdwFlags = 0;
*pdwQueue = WorkQueueId;
return hr;
}
//-------------------------------------------------------
STDMETHODIMP Invoke(IRtwqAsyncResult* pAsyncResult)
{
HRESULT hr = S_OK;
BYTE* pData;
hr = render_info.renderclient->GetBuffer(render_info.buffer_framecount, &pData);
ERROR_EXIT(hr);
update_buffer((unsigned short*)pData, render_info.framesize_bytes / (2*sizeof(unsigned short))); // 2 channels, sizeof(unsigned short) == 2
hr = render_info.renderclient->ReleaseBuffer(render_info.buffer_framecount, 0);
ERROR_EXIT(hr);
return S_OK;
}
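// NB: the IUnknown methods below are non-functional stubs; the object's
// lifetime is managed manually (new/delete in thread_main) rather than by
// COM reference counting.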
STDMETHODIMP QueryInterface(const IID &riid, void **ppvObject) {
return 0;
}
ULONG AddRef() {
return 0;
}
ULONG Release() {
return 0;
}
HRESULT queue(HANDLE event) {
HRESULT hr;
hr = RtwqPutWaitingWorkItem(event, 1, this->pAsyncResult, &this->workItemKey);
return hr;
}
my_rtqueue() : workItemKey(0) {
HRESULT hr = S_OK;
IRtwqAsyncCallback* callback = NULL;
DWORD taskId = 0;
WorkQueueId = RTWQ_MULTITHREADED_WORKQUEUE;
//WorkQueueId = RTWQ_STANDARD_WORKQUEUE;
hr = RtwqLockSharedWorkQueue(L"Pro Audio", 0, &taskId, &WorkQueueId);
ERROR_THROW(hr);
hr = RtwqCreateAsyncResult(NULL, reinterpret_cast<IRtwqAsyncCallback*>(this), NULL, &pAsyncResult);
ERROR_THROW(hr);
}
int stop() {
HRESULT hr;
if (pAsyncResult)
pAsyncResult->Release();
if (0xFFFFFFFF != this->WorkQueueId) {
hr = RtwqUnlockWorkQueue(this->WorkQueueId);
if (FAILED(hr)) {
printf("Failed with RtwqUnlockWorkQueue 0x%x\n", hr);
return 0;
}
}
return 1;
}
};
And so, the actual WASAPI code (HRESULT error checking is omitted for clarity):
void thread_main(LPVOID param) {
HRESULT hr;
REFERENCE_TIME hnsRequestedDuration = 0;
IMMDeviceEnumerator* pEnumerator = NULL;
IMMDevice* pDevice = NULL;
IAudioClient3* pAudioClient = NULL;
IAudioRenderClient* pRenderClient = NULL;
WAVEFORMATEX* pwfx = NULL;
HANDLE hEvent = NULL;
HANDLE hTask = NULL;
UINT32 bufferFrameCount;
BYTE* pData;
DWORD flags = 0;
hr = RtwqStartup();
// also, hr is checked for errors every step of the way
hr = CoInitialize(NULL);
hr = CoCreateInstance(
CLSID_MMDeviceEnumerator, NULL,
CLSCTX_ALL, IID_IMMDeviceEnumerator,
(void**)&pEnumerator);
hr = pEnumerator->GetDefaultAudioEndpoint(
eRender, eConsole, &pDevice);
hr = pDevice->Activate(
IID_IAudioClient, CLSCTX_ALL,
NULL, (void**)&pAudioClient);
WAVEFORMATEX wave_format = {};
wave_format.wFormatTag = WAVE_FORMAT_PCM;
wave_format.nChannels = 2;
wave_format.nSamplesPerSec = 48000;
wave_format.nAvgBytesPerSec = 48000 * 2 * 16 / 8;
wave_format.nBlockAlign = 2 * 16 / 8;
wave_format.wBitsPerSample = 16;
UINT32 DP, FP, MINP, MAXP;
hr = pAudioClient->GetSharedModeEnginePeriod(&wave_format, &DP, &FP, &MINP, &MAXP);
printf("DefaultPeriod: %u, Fundamental period: %u, min_period: %u, max_period: %u\n", DP, FP, MINP, MAXP);
hr = pAudioClient->InitializeSharedAudioStream(AUDCLNT_STREAMFLAGS_EVENTCALLBACK, MINP, &wave_format, 0);
my_rtqueue* workqueue = NULL;
try {
workqueue = new my_rtqueue();
}
catch (...) {
hr = E_ABORT;
ERROR_EXIT(hr);
}
hr = pAudioClient->GetBufferSize(&bufferFrameCount);
PWAVEFORMATEX wf = &wave_format;
UINT32 current_period;
pAudioClient->GetCurrentSharedModeEnginePeriod(&wf, &current_period);
INT32 FrameSize_bytes = bufferFrameCount * wave_format.nChannels * wave_format.wBitsPerSample / 8;
printf("bufferFrameCount: %u, FrameSize_bytes: %d, current_period: %u\n", bufferFrameCount, FrameSize_bytes, current_period);
hr = pAudioClient->GetService(
IID_IAudioRenderClient,
(void**)&pRenderClient);
render_info.framesize_bytes = FrameSize_bytes;
render_info.buffer_framecount = bufferFrameCount;
render_info.renderclient = pRenderClient;
hEvent = CreateEvent(nullptr, false, false, nullptr);
if (hEvent == NULL) { ERROR_EXIT(0); } // CreateEvent returns NULL on failure
hr = pAudioClient->SetEventHandle(hEvent);
const size_t num_samples = FrameSize_bytes / sizeof(unsigned short);
DWORD taskIndex = 0;
hTask = AvSetMmThreadCharacteristics(TEXT("Pro Audio"), &taskIndex);
if (hTask == NULL) {
hr = E_FAIL;
}
hr = pAudioClient->Start(); // Start playing.
running = 1;
while (running) {
workqueue->queue(hEvent);
}
workqueue->stop();
hr = RtwqShutdown();
delete workqueue;
running = 0;
}
This seems to kind of work (i.e. audio is being output), but on every other invocation of my_rtqueue::Invoke(), IAudioRenderClient::GetBuffer() returns an HRESULT of 0x88890006 (AUDCLNT_E_BUFFER_TOO_LARGE), and the actual audio output is certainly not what I intend it to be.
What issues are there with my code? Is this the right way to use RTWQ with WASAPI?
It turns out there were a number of issues with my code, none of which really had anything to do with Rtwq. The biggest issue was me assuming that the shared-mode audio stream was using 16-bit integer samples, when in reality my audio was set up for the 32-bit float format (WAVE_FORMAT_IEEE_FLOAT). The currently active shared-mode format, period, etc. should be fetched like this:
WAVEFORMATEX *wavefmt = NULL;
UINT32 current_period = 0;
hr = pAudioClient->GetCurrentSharedModeEnginePeriod((WAVEFORMATEX**)&wavefmt, &current_period);
wavefmt now contains the output format info of the current shared mode. If the wFormatTag field is equal to WAVE_FORMAT_EXTENSIBLE, one needs to cast the WAVEFORMATEX to WAVEFORMATEXTENSIBLE to see what the actual format is; a minimal sketch of that check:
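// Sketch: discover the actual sample format behind the shared-mode mix.
// Assumes mmreg.h / ksmedia.h for WAVEFORMATEXTENSIBLE and the KSDATAFORMAT_* GUIDs.
if (wavefmt->wFormatTag == WAVE_FORMAT_EXTENSIBLE) {
    const WAVEFORMATEXTENSIBLE* ext = reinterpret_cast<const WAVEFORMATEXTENSIBLE*>(wavefmt);
    if (IsEqualGUID(ext->SubFormat, KSDATAFORMAT_SUBTYPE_IEEE_FLOAT)) {
        // 32-bit float samples; this turned out to be my case
    } else if (IsEqualGUID(ext->SubFormat, KSDATAFORMAT_SUBTYPE_PCM)) {
        // integer PCM; wavefmt->wBitsPerSample gives the sample width
    }
}
After this, one needs to fetch the minimum period supported by the current setup, like so: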
UINT32 DP, FP, MINP, MAXP;
hr = pAudioClient->GetSharedModeEnginePeriod(wavefmt, &DP, &FP, &MINP, &MAXP);
and then initialize the audio stream with the new InitializeSharedAudioStream function:
hr = pAudioClient->InitializeSharedAudioStream(AUDCLNT_STREAMFLAGS_EVENTCALLBACK, MINP, wavefmt, NULL);
... get the buffer's actual size:
hr = pAudioClient->GetBufferSize(&render_info.buffer_framecount);
and use GetCurrentPadding in the Get/ReleaseBuffer logic:
UINT32 pad = 0;
hr = render_info.audioclient->GetCurrentPadding(&pad);
int actual_size = (render_info.buffer_framecount - pad);
hr = render_info.renderclient->GetBuffer(actual_size, &pData);
if (SUCCEEDED(hr)) {
update_buffer((float*)pData, actual_size);
hr = render_info.renderclient->ReleaseBuffer(actual_size, 0);
ERROR_EXIT(hr);
}
The documentation for IAudioClient::Initialize states the following about shared mode streams (I assume it also applies to the new IAudioClient3):
Each time the thread awakens, it should call IAudioClient::GetCurrentPadding to determine how much data to write to a rendering buffer or read from a capture buffer. In contrast to the two buffers that the Initialize method allocates for an exclusive-mode stream that uses event-driven buffering, a shared-mode stream requires a single buffer.
Using GetCurrentPadding solves the problem with AUDCLNT_E_BUFFER_TOO_LARGE, and feeding the buffer with 32-bit float samples instead of 16-bit integers makes the output sound fine on my system (although the effect was quite funky!).
If someone comes up with better/more correct ways to use the Rtwq API, I would love to hear them.

Running a .NET application from C++

I have a byte array of a .NET application inside C++ code.
I want to execute this .NET application without writing those bytes to disk. ICLRRuntimeHost::ExecuteInDefaultAppDomain expects a path to the assembly, so it's out of the equation here. I'm looking for a possible way (or hack) to pass the binary directly to the CLR.
So what can I do?
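One approach is to load the raw bytes into the default AppDomain with _AppDomain::Load_3 and invoke the entry point through reflection. The code below assumes #import "mscorlib.tlb" raw_interfaces_only for the _AppDomainPtr/_AssemblyPtr/_TypePtr wrappers, and bstrAssemblyName and bstrClassName still need to be filled in with the real assembly and type names: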
//todo error checks/cleanup
HRESULT hr;
ICLRMetaHost *pMetaHost = NULL;
ICLRRuntimeInfo *pRuntimeInfo = NULL;
ICorRuntimeHost *pCorRuntimeHost = NULL;
IUnknownPtr spAppDomainThunk = NULL;
_AppDomainPtr spDefaultAppDomain = NULL;
bstr_t bstrAssemblyName(L"");
_AssemblyPtr spAssembly = NULL;
bstr_t bstrClassName(L"");
_TypePtr spType = NULL;
variant_t vtEmpty;
bstr_t bstrStaticMethodName(L"Main");
variant_t vtLengthRet;
hr = CLRCreateInstance(CLSID_CLRMetaHost, IID_PPV_ARGS(&pMetaHost));
const wchar_t* pszVersion = L"v2.0.50727";
hr = pMetaHost->GetRuntime(pszVersion, IID_PPV_ARGS(&pRuntimeInfo));
BOOL fLoadable;
hr = pRuntimeInfo->IsLoadable(&fLoadable);
if (!fLoadable) { wprintf(L".NET runtime %s cannot be loaded\n", pszVersion); return; }
hr = pRuntimeInfo->GetInterface(CLSID_CorRuntimeHost, IID_PPV_ARGS(&pCorRuntimeHost));
hr = pCorRuntimeHost->Start();
hr = pCorRuntimeHost->GetDefaultDomain(&spAppDomainThunk);
hr = spAppDomainThunk->QueryInterface(IID_PPV_ARGS(&spDefaultAppDomain));
SAFEARRAYBOUND bounds[1];
bounds[0].cElements = array_len;
bounds[0].lLbound = 0;
SAFEARRAY* arr = SafeArrayCreate(VT_UI1, 1, bounds);
SafeArrayLock(arr);
memcpy(arr->pvData, bytearray, array_len);
SafeArrayUnlock(arr);
hr = spDefaultAppDomain->Load_3(arr, &spAssembly);
hr = spAssembly->GetType_2(bstrClassName, &spType);
hr = spType->InvokeMember_3(bstrStaticMethodName, static_cast<BindingFlags>(BindingFlags_InvokeMethod | BindingFlags_Static | BindingFlags_Public), NULL, vtEmpty, nullptr, &vtLengthRet);
SafeArrayDestroy(arr);

Capture a frame from video using DirectShow filters in C++

I have taken code from the net to capture a frame from a video file and modified it to capture all frames and store them as BMP images.
HRESULT GrabVideoBitmap(PCWSTR pszVideoFile)
{
IGraphBuilder *pGraph = NULL;
IMediaControl *pControl = NULL;
IMediaEventEx *pEvent = NULL;
IBaseFilter *pGrabberF = NULL;
ISampleGrabber *pGrabber = NULL;
IBaseFilter *pSourceF = NULL;
IEnumPins *pEnum = NULL;
IPin *pPin = NULL;
IBaseFilter *pNullF = NULL;
long evCode;
wchar_t temp[10];
wchar_t framename[50] = IMAGE_FILE_PATH; // L"D:\\sampleframe";
BYTE *pBuffer = NULL;
HRESULT hr = CoInitialize(NULL);
if (FAILED(hr))
return 0;
hr = CoCreateInstance(CLSID_FilterGraph, NULL, CLSCTX_INPROC_SERVER,
IID_PPV_ARGS(&pGraph));
hr = pGraph->QueryInterface(IID_PPV_ARGS(&pControl));
hr = pGraph->QueryInterface(IID_PPV_ARGS(&pEvent));
// Create the Sample Grabber filter.
hr = CoCreateInstance(CLSID_SampleGrabber, NULL, CLSCTX_INPROC_SERVER,
IID_PPV_ARGS(&pGrabberF));
hr = pGraph->AddFilter(pGrabberF, L"Sample Grabber");
hr = pGrabberF->QueryInterface(IID_PPV_ARGS(&pGrabber));
// Display the metadata of the file
DisplayFileInfo((wchar_t*)pszVideoFile);
AM_MEDIA_TYPE mt;
ZeroMemory(&mt, sizeof(mt));
mt.majortype = MEDIATYPE_Video;
mt.subtype = MEDIASUBTYPE_RGB24;
hr = pGrabber->SetMediaType(&mt);
hr = pGraph->AddSourceFilter(pszVideoFile, L"Source", &pSourceF);
hr = pSourceF->EnumPins(&pEnum);
while (S_OK == pEnum->Next(1, &pPin, NULL))
{
hr = ConnectFilters(pGraph, pPin, pGrabberF);
SafeRelease(&pPin);
if (SUCCEEDED(hr))
{
break;
}
}
hr = CoCreateInstance(CLSID_NullRenderer, NULL, CLSCTX_INPROC_SERVER,
IID_PPV_ARGS(&pNullF));
hr = pGraph->AddFilter(pNullF, L"Null Filter");
hr = ConnectFilters(pGraph, pGrabberF, pNullF);
hr = pGrabber->SetOneShot(TRUE);
hr = pGrabber->SetBufferSamples(TRUE);
hr = pControl->Run();
hr = pEvent->WaitForCompletion(INFINITE, &evCode);
for (int i = 0; i < 10; i++)
{
// Find the required buffer size.
long cbBuffer;
hr = pGrabber->GetCurrentBuffer(&cbBuffer, NULL);
pBuffer = (BYTE*)CoTaskMemAlloc(cbBuffer);
hr = pGrabber->GetCurrentBuffer(&cbBuffer, (long*)pBuffer);
hr = pGrabber->GetConnectedMediaType(&mt);
// Examine the format block.
if ((mt.formattype == FORMAT_VideoInfo) &&
(mt.cbFormat >= sizeof(VIDEOINFOHEADER)) &&
(mt.pbFormat != NULL))
{
swprintf(temp, 5, L"%d", i);
wcscat_s(framename, temp);
wcscat_s(framename, L".bmp");
VIDEOINFOHEADER *pVih = (VIDEOINFOHEADER*)mt.pbFormat;
hr = WriteBitmap((PCWSTR)framename, &pVih->bmiHeader,
mt.cbFormat - SIZE_PREHEADER, pBuffer, cbBuffer);
wcscpy_s(framename, IMAGE_FILE_PATH);
}
else
{
// Invalid format.
hr = VFW_E_INVALIDMEDIATYPE;
}
FreeMediaType(mt);
}
done:
CoTaskMemFree(pBuffer);
SafeRelease(&pPin);
SafeRelease(&pEnum);
SafeRelease(&pNullF);
SafeRelease(&pSourceF);
SafeRelease(&pGrabber);
SafeRelease(&pGrabberF);
SafeRelease(&pControl);
SafeRelease(&pEvent);
SafeRelease(&pGraph);
return hr;
}
The input video file has 132 frames, but only 68 images are generated. Also, the last frame of the video is captured for the last 38 images.
I think the DirectShow graph is running continuously and WriteBitmap() is missing frames.
How do I get control in DirectShow so that I capture one frame, write it to a BMP file, then capture the next frame, and so save all the frames as BMP images?
Thanks
Arun
Your approach is wrong. Currently you set the sample grabber to one-shot mode and then wait for graph completion, so this only works for capturing a single frame. You need to capture the frames inside an ISampleGrabberCB callback instead: implement the ISampleGrabberCB interface and use ISampleGrabber::SetCallback on your pGrabber filter to point it at your implementation. After that you can capture the frames inside either the SampleCB or the BufferCB method. http://www.infognition.com/blog/2013/accessing_raw_video_in_directshow.html
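A minimal sketch of such a callback, assuming the qedit.h declaration of ISampleGrabberCB (WriteBitmapFrame is a hypothetical helper that wraps the WriteBitmap logic above):
class FrameGrabberCB : public ISampleGrabberCB
{
    LONG m_cRef = 1;
    LONG m_frame = 0;
public:
    // IUnknown
    STDMETHODIMP QueryInterface(REFIID riid, void** ppv)
    {
        if (riid == IID_IUnknown || riid == IID_ISampleGrabberCB) {
            *ppv = static_cast<ISampleGrabberCB*>(this);
            AddRef();
            return S_OK;
        }
        *ppv = NULL;
        return E_NOINTERFACE;
    }
    STDMETHODIMP_(ULONG) AddRef() { return InterlockedIncrement(&m_cRef); }
    STDMETHODIMP_(ULONG) Release()
    {
        ULONG c = InterlockedDecrement(&m_cRef);
        if (c == 0) delete this;
        return c;
    }
    // Called synchronously for every sample that reaches the grabber.
    STDMETHODIMP SampleCB(double /*SampleTime*/, IMediaSample* pSample)
    {
        BYTE* pData = NULL;
        if (SUCCEEDED(pSample->GetPointer(&pData))) {
            long cbData = pSample->GetActualDataLength();
            // WriteBitmapFrame(m_frame++, pData, cbData); // hypothetical helper
        }
        return S_OK;
    }
    STDMETHODIMP BufferCB(double, BYTE*, long) { return E_NOTIMPL; }
};
Hook it up before running the graph, and drop the one-shot mode so samples keep flowing:
FrameGrabberCB* pCB = new FrameGrabberCB();
hr = pGrabber->SetCallback(pCB, 0); // 0 = deliver the IMediaSample to SampleCB
// and remove pGrabber->SetOneShot(TRUE), which stops the graph after the first sample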

WIC increases the size of TIFF image

Scenario:
Load a TIFF image, extract the frames of the TIFF image, and save them locally.
Combine the extracted frames into the output TIFF image.
When I try to combine the frames, the size of the output TIFF image increases drastically. For example, if my input size is 40 MB, the output increases to 300 MB.
Below is the code demonstrating the scenario:
void readTiff()
{
HRESULT hr;
IWICBitmapFrameDecode *frameDecode = NULL;
IWICFormatConverter *formatConverter = NULL;
IWICBitmapEncoder *encoder = NULL;
IWICStream *pOutStream = NULL;
IWICBitmapFrameEncode *frameEncode = NULL;
IWICImagingFactory* m_pWICFactory;
hr = CoCreateInstance(
CLSID_WICImagingFactory,
NULL,
CLSCTX_INPROC_SERVER,
IID_PPV_ARGS(&m_pWICFactory)
);
IWICBitmapDecoder *pIDecoder = NULL;
hr = m_pWICFactory->CreateDecoderFromFilename(
L"D:\\test28\\Multitiff_files\\300dpiTIFF40MB_WATER.tif", // Image to be decoded
NULL, // Do not prefer a particular vendor
GENERIC_READ, // Desired read access to the file
WICDecodeMetadataCacheOnDemand, // Cache metadata when needed
&pIDecoder // Pointer to the decoder
);
UINT frameCount = 0;
pIDecoder->GetFrameCount(&frameCount);
for (int i = 0; i < frameCount; i++)
{
wchar_t temp[200];
int j = i;
swprintf_s(temp, 200, L"D:\\test28\\Multitiff_files\\out\\filename_png%d.jpeg", i);
if (SUCCEEDED(hr))
hr = m_pWICFactory->CreateStream(&pOutStream);
if (SUCCEEDED(hr))
hr = pOutStream->InitializeFromFilename(temp, GENERIC_WRITE);
if (SUCCEEDED(hr))
hr = m_pWICFactory->CreateEncoder(GUID_ContainerFormatJpeg, NULL, &encoder);
if (SUCCEEDED(hr))
hr = encoder->Initialize(pOutStream, WICBitmapEncoderNoCache);
hr = pIDecoder->GetFrame(i, &frameDecode);
if (SUCCEEDED(hr))
hr = m_pWICFactory->CreateFormatConverter(&formatConverter);
hr = formatConverter->Initialize(
frameDecode, // Source frame to convert
GUID_WICPixelFormat8bppIndexed, // The desired pixel format
WICBitmapDitherTypeNone, // The desired dither pattern
NULL, // The desired palette
0.f, // The desired alpha threshold
WICBitmapPaletteTypeMedianCut // Palette translation type
);
IPropertyBag2 *pPropertybag = NULL;
//Create a new frame..
hr = encoder->CreateNewFrame(&frameEncode, &pPropertybag);
//PROPBAG2 option = { 0 };
//option.pstrName = L"ImageQuality";
//VARIANT varValue;
//VariantInit(&varValue);
//varValue.vt = VT_R4;
//varValue.fltVal = 0.01f;
//hr = pPropertybag->Write(
// 1, // number of properties being set
// &option,
// &varValue);
WICPixelFormatGUID pixelFormat;
hr = frameEncode->Initialize(pPropertybag);
//pixelFormat = GUID_WICPixelFormat8bppIndexed;
frameDecode->GetPixelFormat(&pixelFormat);
hr = frameEncode->SetPixelFormat(&pixelFormat);
hr = frameEncode->WriteSource(formatConverter, NULL);
frameEncode->Commit();
encoder->Commit();
if (formatConverter)
formatConverter->Release();
if (frameDecode)
frameDecode->Release();
if (frameEncode)
frameEncode->Release();
if (pOutStream)
pOutStream->Release();
if (encoder)
encoder->Release();
}
if (m_pWICFactory)
m_pWICFactory->Release();
pIDecoder->Release();
}
void combineFile(int count)
{
HRESULT hr;
IWICImagingFactory *pFactory = NULL;
IWICStream *pInStream = NULL;
IWICBitmapDecoder *decoder = NULL;
IWICBitmapFrameDecode *frameDecode = NULL;
IWICFormatConverter *formatConverter = NULL;
IWICBitmapEncoder *encoder = NULL;
IWICStream *pOutStream = NULL;
IWICBitmapFrameEncode *frameEncode = NULL;
hr = CoCreateInstance(CLSID_WICImagingFactory, NULL, CLSCTX_INPROC_SERVER, IID_IWICImagingFactory, (LPVOID*)&pFactory);
if (!SUCCEEDED(hr)) {
hr = CoCreateInstance(CLSID_WICImagingFactory, NULL, CLSCTX_INPROC_SERVER, IID_IWICImagingFactory, (LPVOID*)&pFactory);
}
if (SUCCEEDED(hr))
hr = pFactory->CreateStream(&pOutStream);
if (SUCCEEDED(hr))
hr = pOutStream->InitializeFromFilename(L"D:\\test28\\Multitiff_files\\out\\out.tiff", GENERIC_WRITE);
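// NB: GUID_ContainerFormatWmp selects the JPEG-XR (Windows Media Photo)
// container; for a .tiff output with a TiffCompressionMethod option the
// TIFF container (GUID_ContainerFormatTiff) would presumably be intended.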
if (SUCCEEDED(hr))
hr = pFactory->CreateEncoder(GUID_ContainerFormatWmp, NULL, &encoder);
if (SUCCEEDED(hr))
hr = encoder->Initialize(pOutStream, WICBitmapEncoderNoCache);
for (int i = 0; i < count; i++)
{
wchar_t temp[200];
swprintf_s(temp, 200, L"D:\\test28\\Multitiff_files\\out\\filename_png%d.jpeg", i);
if (SUCCEEDED(hr))
hr = pFactory->CreateStream(&pInStream);
if (SUCCEEDED(hr))
hr = pInStream->InitializeFromFilename(temp, GENERIC_READ);
if (SUCCEEDED(hr))
hr = pFactory->CreateDecoderFromStream(pInStream, NULL, WICDecodeMetadataCacheOnLoad, &decoder);
if (SUCCEEDED(hr))
hr = pFactory->CreateFormatConverter(&formatConverter);
hr = decoder->GetFrame(0, &frameDecode);
//hr = formatConverter->Initialize(
// frameDecode, // Source frame to convert
// GUID_WICPixelFormat8bppIndexed, // The desired pixel format
// WICBitmapDitherTypeNone, // The desired dither pattern
// NULL, // The desired palette
// 0.f, // The desired alpha threshold
// WICBitmapPaletteTypeMedianCut // Palette translation type
// );
WICPixelFormatGUID pixelFormat;
frameDecode->GetPixelFormat(&pixelFormat);
IPropertyBag2 *pPropertybag = NULL;
//Create a new frame..
hr = encoder->CreateNewFrame(&frameEncode, &pPropertybag);
PROPBAG2 option = { 0 };
option.pstrName = L"TiffCompressionMethod";
VARIANT varValue;
VariantInit(&varValue);
varValue.vt = VT_UI1;
varValue.bVal = WICTiffCompressionOption::WICTiffCompressionZIP;
hr = pPropertybag->Write(1, &option, &varValue);
hr = frameEncode->Initialize(pPropertybag);
hr = frameEncode->SetPixelFormat(&pixelFormat);
hr = frameEncode->WriteSource(formatConverter, NULL);
hr = frameEncode->Commit();
if (pInStream)
pInStream->Release();
if (decoder)
decoder->Release();
if (formatConverter)
formatConverter->Release();
if (frameEncode)
frameEncode->Release();
//if (frameDecode)
//frameDecode->Release();
}
encoder->Commit();
if (pFactory)
pFactory->Release();
if (encoder)
encoder->Release();
if (pOutStream)
pOutStream->Release();
}
One easy way to debug this kind of issue is to use ImageMagick, which is installed on most Linux distros and is available for OSX and Windows. First, use the identify utility within the suite with its verbose option to find out everything about your before and after images, like this:
# Find out all we know about first image and put in file "1.txt"
identify -verbose image1.tif > 1.txt
# Find out all we know about second image and put in file "2.txt"
identify -verbose image2.tif > 2.txt
Now use your favourite file comparison tool to see the differences; in particular, compare fields such as Compression and Depth, which usually account for large size differences:
opendiff 1.txt 2.txt

How to distinguish between items returned by EnumObjects of IShellFolder

I am trying to get the computers in my workgroup using the Shell:
HRESULT hr = S_OK;
ULONG celtFetched;
LPITEMIDLIST pidlItems = NULL;
LPITEMIDLIST netItemIdLst = NULL;
IShellFolder* pFolder = NULL;
IEnumIDList* pEnumIds = NULL;
IShellFolder* pDesktopFolder = NULL;
LPITEMIDLIST full_pid;
LPMALLOC pMalloc = NULL;
hr = SHGetMalloc(&pMalloc);
hr = SHGetDesktopFolder(&pDesktopFolder);
hr = SHGetSpecialFolderLocation(NULL, CSIDL_COMPUTERSNEARME, &netItemIdLst);
hr = pDesktopFolder->BindToObject(netItemIdLst, NULL,
IID_IShellFolder, (void **)&pFolder);
if(SUCCEEDED(hr))
{
hr = pFolder->EnumObjects(NULL, SHCONTF_NONFOLDERS, &pEnumIds);
if(SUCCEEDED(hr))
{
STRRET strDisplayName;
while((hr = pEnumIds->Next(1, &pidlItems, &celtFetched)) == S_OK && celtFetched > 0)
{
hr = pFolder->GetDisplayNameOf(pidlItems, SHGDN_NORMAL, &strDisplayName);
if(SUCCEEDED(hr))
{
wprintf(L"%s\n", strDisplayName.pOleStr);
}
}
}
}
But this returns the computers, the media devices, and the streaming devices (is that its name?).
Say a computer on my workgroup is called OSOFEM... STRRET::pOleStr returns the strings OSOFEM (which is the computer), OSOFEM (I think the streaming device), and OSOFEM:example#email.com: (which is the Windows Media Player).
My question is: how can I distinguish which one is the computer? Of course, the names can't be used, because two items with exactly the same name will always be returned...
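One thing worth trying, sketched below (untested on my side): ask for the parsing name instead of the display name, and/or inspect the item attributes. The parsing names should differ even when the display names collide; for the computer itself I would expect a UNC-style name such as \\OSOFEM.
STRRET strParsing;
hr = pFolder->GetDisplayNameOf(pidlItems, SHGDN_FORPARSING, &strParsing);
if (SUCCEEDED(hr))
{
    WCHAR szParsing[MAX_PATH] = {0};
    StrRetToBufW(&strParsing, pidlItems, szParsing, ARRAYSIZE(szParsing)); // shlwapi.h
    wprintf(L"parsing name: %s\n", szParsing);
}
// Attributes may also differ; a browsable computer typically reports SFGAO_FOLDER.
SFGAOF attrs = SFGAO_FOLDER | SFGAO_FILESYSTEM;
hr = pFolder->GetAttributesOf(1, (LPCITEMIDLIST*)&pidlItems, &attrs);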