How to write an image to STDOUT with Wincodec.h (WIC) - c++

I came across this example and couldn't figure out how to change stream->InitializeFromFilename into something that writes to STDOUT, inside the HRESULT SavePixelsToFile32bppPBGRA(UINT width, UINT height, UINT stride, LPBYTE pixels, LPWSTR filePath, const GUID &format) function. I tried using stream->InitializeFromMemory but hit a wall due to my minimal C++ experience.
The reason behind this is that I'm building something and would like to pipe the example's STDOUT into another program.
Thanks in advance.

Thanks to @SimonMourier, I was able to update his code to achieve what was needed.
SavePixelsToFile32bppPBGRA now writes the data to an IStream pointer, streamOut. It also invokes a new function at the end to output the data.
HRESULT SavePixelsToFile32bppPBGRA(UINT width, UINT height, UINT stride, LPBYTE pixels, const GUID& format)
{
    // HRCHECK and RELEASE are the helper macros from the original answer
    // (HRCHECK stores the result in hr and jumps to cleanup on failure).
    if (!pixels)
        return E_INVALIDARG;

    HRESULT hr = S_OK;
    IWICImagingFactory* factory = nullptr;
    IWICBitmapEncoder* encoder = nullptr;
    IWICBitmapFrameEncode* frame = nullptr;
    IWICStream* streamOut = nullptr;
    IStream* streamIn = nullptr;
    GUID pf = GUID_WICPixelFormat32bppPBGRA;
    HRESULT coInit = CoInitialize(nullptr); // note: S_OK is 0, so don't treat this as a BOOL

    HRCHECK(CoCreateInstance(CLSID_WICImagingFactory, nullptr, CLSCTX_INPROC_SERVER, IID_PPV_ARGS(&factory)));
    HRCHECK(factory->CreateStream(&streamOut));
    HRCHECK(CreateStreamOnHGlobal(NULL, TRUE, &streamIn)); // TRUE: free the HGLOBAL when the stream is released
    HRCHECK(streamOut->InitializeFromIStream(streamIn));
    HRCHECK(factory->CreateEncoder(format, nullptr, &encoder));
    HRCHECK(encoder->Initialize(streamOut, WICBitmapEncoderNoCache));
    HRCHECK(encoder->CreateNewFrame(&frame, nullptr)); // we don't use options here
    HRCHECK(frame->Initialize(nullptr)); // we don't use any options here
    HRCHECK(frame->SetSize(width, height));
    HRCHECK(frame->SetPixelFormat(&pf));
    HRCHECK(frame->WritePixels(height, stride, stride * height, pixels));
    HRCHECK(frame->Commit());
    HRCHECK(encoder->Commit());

    sendIStreamToOutput(streamOut);

cleanup:
    RELEASE(streamIn);
    RELEASE(streamOut);
    RELEASE(frame);
    RELEASE(encoder);
    RELEASE(factory);
    if (SUCCEEDED(coInit)) CoUninitialize();
    return hr;
}
sendIStreamToOutput reads the data from the IStream and writes it to std::cout (with binary mode active).
void sendIStreamToOutput(IStream* pIStream)
{
    // Requires <io.h>, <fcntl.h> and <iostream>.
    STATSTG sts;
    pIStream->Stat(&sts, STATFLAG_DEFAULT);
    ULARGE_INTEGER uli = sts.cbSize;
    LARGE_INTEGER zero;
    zero.QuadPart = 0;
    ULONG size = (ULONG)uli.QuadPart;
    char* bits = new char[size];
    ULONG bytesRead;
    pIStream->Seek(zero, STREAM_SEEK_SET, NULL);
    pIStream->Read(bits, size, &bytesRead);
    _setmode(_fileno(stdout), _O_BINARY); // switch stdout to binary mode so no bytes get mangled
    std::cout.write(bits, bytesRead);
    std::cout.flush();
    delete[] bits;
}
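A minimal usage sketch (my assumptions: the two functions above are in scope, sendIStreamToOutput is forward-declared since it is defined after its caller, and PNG is used as the container format). It just fills a dummy 32bpp PBGRA buffer and writes it to STDOUT:
#include <vector>

int main()
{
    const UINT width = 256, height = 256, stride = width * 4; // 4 bytes per 32bpp PBGRA pixel
    std::vector<BYTE> pixels(stride * height, 0xFF);          // opaque white test image
    HRESULT hr = SavePixelsToFile32bppPBGRA(width, height, stride,
                                            pixels.data(), GUID_ContainerFormatPng);
    return SUCCEEDED(hr) ? 0 : 1;
}
The encoded image then lands on STDOUT and can be piped into another program, e.g. myapp.exe | some-consumer.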

Related

How to record continuous raw audio data into a circular buffer with C++ on Windows 10?

Since Windows Multimedia turned out to be utterly incapable of recording continuous audio, I got the hint to use Windows Core Audio. There is sort of a manual here, but I can't figure out how to write the loads of overhead code to get the recording working. Can anyone provide a complete, minimal implementation of continuous audio recording to a circular buffer?
So far I am stuck at the code below not getting past the line pEnumerator->GetDefaultAudioEndpoint(eRender, eConsole, &pDevice); because pEnumerator remains nullptr.
#define VC_EXTRALEAN
#define _USE_MATH_DEFINES
#include <Windows.h>
#include <Audioclient.h>
#include <Mmdeviceapi.h>
#define REFTIMES_PER_SEC 10000000
#define REFTIMES_PER_MILLISEC 10000
int main() {
    REFERENCE_TIME hnsRequestedDuration = REFTIMES_PER_SEC;
    UINT32 bufferFrameCount;
    UINT32 numFramesAvailable;
    IMMDeviceEnumerator* pEnumerator = NULL;
    IMMDevice* pDevice = NULL;
    IAudioClient* pAudioClient = NULL;
    IAudioCaptureClient* pCaptureClient = NULL;
    WAVEFORMATEX* pwfx = NULL;
    UINT32 packetLength = 0;
    BYTE* pData;
    DWORD flags;
    CoCreateInstance(__uuidof(MMDeviceEnumerator), NULL, CLSCTX_ALL, __uuidof(IMMDeviceEnumerator), (void**)&pEnumerator);
    pEnumerator->GetDefaultAudioEndpoint(eRender, eConsole, &pDevice);
    pDevice->Activate(__uuidof(IAudioClient), CLSCTX_ALL, NULL, (void**)&pAudioClient);
    pAudioClient->GetMixFormat(&pwfx);
    pAudioClient->Initialize(AUDCLNT_SHAREMODE_SHARED, AUDCLNT_STREAMFLAGS_LOOPBACK, hnsRequestedDuration, 0, pwfx, NULL);
    pAudioClient->GetBufferSize(&bufferFrameCount); // Get the size of the allocated buffer.
    pAudioClient->GetService(__uuidof(IAudioCaptureClient), (void**)&pCaptureClient);
    // Calculate the actual duration of the allocated buffer.
    REFERENCE_TIME hnsActualDuration = (double)REFTIMES_PER_SEC* bufferFrameCount / pwfx->nSamplesPerSec;
    pAudioClient->Start(); // Start recording.
    // Each loop fills about half of the shared buffer.
    while(true) {
        // Sleep for half the buffer duration.
        Sleep(hnsActualDuration/REFTIMES_PER_MILLISEC/2);
        pCaptureClient->GetNextPacketSize(&packetLength);
        while(packetLength != 0) {
            // Get the available data in the shared buffer.
            pCaptureClient->GetBuffer(&pData, &numFramesAvailable, &flags, NULL, NULL);
            if(flags&AUDCLNT_BUFFERFLAGS_SILENT) {
                pData = NULL; // Tell CopyData to write silence.
            }
            // Copy the available capture data to the audio sink.
            //hr = pMySink->CopyData(pData, numFramesAvailable, &bDone);
            pCaptureClient->ReleaseBuffer(numFramesAvailable);
            pCaptureClient->GetNextPacketSize(&packetLength);
        }
    }
    pAudioClient->Stop();
    return 0;
}
EDIT (24.07.2021):
Here is an update of my code for troubleshooting:
#define VC_EXTRALEAN
#define _USE_MATH_DEFINES
#include <Windows.h>
#include <Audioclient.h>
#include <Mmdeviceapi.h>
#include <chrono>
typedef unsigned int uint; // assumed typedef; println, sleep and running below are also helpers defined elsewhere in the author's code
class Clock {
private:
    typedef std::chrono::high_resolution_clock clock;
    std::chrono::time_point<clock> t;
public:
    Clock() { start(); }
    void start() { t = clock::now(); }
    double stop() const { return std::chrono::duration_cast<std::chrono::duration<double>>(clock::now()-t).count(); }
};
const uint base = 4096;
const uint sample_rate = 48000; // must be supported by microphone
const uint sample_size = 1*base; // must be a power of 2
const uint bandwidth = 5000; // must be <= sample_rate/2
float* wave = new float[sample_size]; // circular buffer
void fill(float* const wave, const float* const buffer, int offset) {
    for(int i=(int)sample_size-1; i>=offset; i--) { // start at sample_size-1: writing wave[sample_size] would be out of bounds
        wave[i] = wave[i-offset];
    }
    for(int i=0; i<offset; i++) {
        const uint p = offset-1-i;
        wave[i] = 0.5f*(buffer[2*p]+buffer[2*p+1]); // left and right channels
    }
}
int main() {
    for(uint i=0; i<sample_size; i++) wave[i] = 0.0f;
    Clock clock;
#define REFTIMES_PER_SEC 10000000
#define REFTIMES_PER_MILLISEC 10000
    REFERENCE_TIME hnsRequestedDuration = REFTIMES_PER_SEC;
    UINT32 bufferFrameCount;
    UINT32 numFramesAvailable;
    IMMDeviceEnumerator* pEnumerator = NULL;
    IMMDevice* pDevice = NULL;
    IAudioClient* pAudioClient = NULL;
    IAudioCaptureClient* pCaptureClient = NULL;
    WAVEFORMATEX* pwfx = NULL;
    UINT32 packetLength = 0;
    BYTE* pData;
    DWORD flags;
    CoInitializeEx(NULL, COINIT_MULTITHREADED);
    CoCreateInstance(__uuidof(MMDeviceEnumerator), NULL, CLSCTX_ALL, __uuidof(IMMDeviceEnumerator), (void**)&pEnumerator);
    pEnumerator->GetDefaultAudioEndpoint(eRender, eConsole, &pDevice);
    pDevice->Activate(__uuidof(IAudioClient), CLSCTX_ALL, NULL, (void**)&pAudioClient);
    pAudioClient->GetMixFormat(&pwfx);
    println(pwfx->wFormatTag);// 65534
    println(WAVE_FORMAT_PCM);// 1
    println(pwfx->nChannels);// 2
    println((uint)pwfx->nSamplesPerSec);// 48000
    println(pwfx->wBitsPerSample);// 32
    println(pwfx->nBlockAlign);// 8
    println(pwfx->wBitsPerSample*pwfx->nChannels/8);// 8
    println((uint)pwfx->nAvgBytesPerSec);// 384000
    println((uint)(pwfx->nBlockAlign*pwfx->nSamplesPerSec*pwfx->nChannels));// 768000
    println(pwfx->cbSize);// 22
    pAudioClient->Initialize(AUDCLNT_SHAREMODE_SHARED, AUDCLNT_STREAMFLAGS_LOOPBACK, hnsRequestedDuration, 0, pwfx, NULL);
    pAudioClient->GetBufferSize(&bufferFrameCount); // Get the size of the allocated buffer.
    pAudioClient->GetService(__uuidof(IAudioCaptureClient), (void**)&pCaptureClient);
    // Calculate the actual duration of the allocated buffer.
    //REFERENCE_TIME hnsActualDuration = (double)REFTIMES_PER_SEC* bufferFrameCount / pwfx->nSamplesPerSec;
    pAudioClient->Start(); // Start recording.
    while(running) {
        pCaptureClient->GetNextPacketSize(&packetLength); // packetLength and numFramesAvailable are either 0 or 480
        pCaptureClient->GetBuffer(&pData, &numFramesAvailable, &flags, NULL, NULL);
        const int offset = (uint)numFramesAvailable;
        if(offset>0) {
            fill(wave, (float*)pData, offset); // here I add pData to the circular buffer "wave"
        }
        while(packetLength != 0) {
            pCaptureClient->GetBuffer(&pData, &numFramesAvailable, &flags, NULL, NULL); // Get the available data in the shared buffer.
            if(flags&AUDCLNT_BUFFERFLAGS_SILENT) {
                pData = NULL; // Tell CopyData to write silence.
            }
            pCaptureClient->ReleaseBuffer(numFramesAvailable);
            pCaptureClient->GetNextPacketSize(&packetLength);
        }
        sleep(1.0/120.0-clock.stop());
        clock.start();
    }
    pAudioClient->Stop();
}
You're not calling CoInitializeEx, so all COM calls will fail.
You should also be testing all calls to see if they return an error.
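For example, the first few calls with error checking might look like this (a sketch reusing the question's variables, not code from the original answer; printf needs <cstdio>):
HRESULT hr = CoInitializeEx(NULL, COINIT_MULTITHREADED);
if (FAILED(hr)) { printf("CoInitializeEx failed: 0x%08lx\n", hr); return 1; }
hr = CoCreateInstance(__uuidof(MMDeviceEnumerator), NULL, CLSCTX_ALL,
                      __uuidof(IMMDeviceEnumerator), (void**)&pEnumerator);
if (FAILED(hr)) { printf("CoCreateInstance failed: 0x%08lx\n", hr); return 1; }
// ... and so on for every call that returns an HRESULT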
To address the questions posed in the comments:
I believe that if you want to operate the endpoint in shared mode then you have to use the parameters returned by GetMixFormat. This means that:
you are limited to the one sample rate (unless you write code to perform a conversion, which is a non-trivial task)
if you want the samples as floats, you will have to convert them yourself
To write code that runs on all machines, you must cater for whatever the mix format throws at you. This might be:
16 bit integers
24 bit integers (nBlockAlign = 3)
24 bit integers in 32 bit containers (nBlockAlign = 4)
32 bit integers
32 bit floating point (rare)
64 bit floating point (unheard of, in my experience)
The samples will be in the native byte order of the machine your code is running on, and are interleaved.
So, case out on the various parameters in pwfx and write the relevant code for each sample format you want to support.
Assuming you want your floats to be normalised to -1 .. +1, and 2-channel input data, you might do this for 16 bit integers, for example:
const int16_t *inbuf = (const int16_t *) pData;
float *outbuf = ...;
for (int i = 0; i < numFramesAvailable * 2; ++i)
{
    int16_t sample = *inbuf++;
    *outbuf++ = (float) (sample * (1.0 / 32767));
}
Note that I avoid a (slow) floating point division by multiplying by the reciprocal (the compiler will pre-calculate 1.0 / 32767).
I'll leave the rest to you.
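For the 32-bit float case (which is what the question's mix format actually reports), a minimal sketch along the same lines reduces to a straight copy, since the float samples are already in the -1 .. +1 range:
const float *inbuf = (const float *) pData;
float *outbuf = ...; // same destination as above
for (int i = 0; i < numFramesAvailable * 2; ++i) // 2 interleaved channels
{
    *outbuf++ = *inbuf++; // already normalised floats, no scaling needed
}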
You could use this audio library instead. It's way easier to get up and running than trying to interface with the platform-specific SDKs:
http://www.music.mcgill.ca/~gary/rtaudio/recording.html
Also, while removing the sleep might not help in your example, you should never call sleep, lock a mutex, or allocate memory during audio processing. The delay introduced by those is completely arbitrary compared to the short buffer times, and so will always create problems for you.

How to convert IStream to Base64 string in c++

I have an IStream and I want to convert it to a Base64 string in C++, so that I can use it in C++/CLI to convert it to a .NET stream.
Similarly, I want to know how to convert a Base64 string back to an IStream.
My code uses ATL CString...
template<class T>
HRESULT StreamToBase64(IStream* pStream, T &strOut)
{
    // Clear buffer
    strOut.ReleaseBuffer(0);
    if (!pStream)
        return S_OK;
    // Get size
    STATSTG stat;
    pStream->Stat(&stat, STATFLAG_NONAME);
    unsigned iLen = stat.cbSize.LowPart;
    // Seek to start
    LARGE_INTEGER lPos;
    lPos.QuadPart = 0;
    HRESULT hr = pStream->Seek(lPos, STREAM_SEEK_SET, nullptr);
    if (FAILED(hr))
        return hr;
    // Reserve the memory (Mfx::BinToBase64GetRequiredLength / Mfx::BinToBase64A are the author's own Base64 helpers)
    strOut.GetBuffer(Mfx::BinToBase64GetRequiredLength(iLen));
    strOut.ReleaseBuffer(0);
    // Encode one line at a time. One line of Base64 (76 characters) encodes 57 bytes
    // of input, so we use a buffer of just under 1 KB.
    const int iBase64LineLen = 76/4*3;
    BYTE bLine[(1024/iBase64LineLen)*iBase64LineLen];
    ULONG bRead;
    while (SUCCEEDED(hr = pStream->Read(bLine, sizeof(bLine), &bRead)) && bRead!=0)
        strOut += T(Mfx::BinToBase64A(bLine, bRead));
    // And seek back to start
    pStream->Seek(lPos, STREAM_SEEK_SET, nullptr);
    return SUCCEEDED(hr) ? S_OK : hr;
}
HRESULT StreamToBase64(IStream* pStream, CStringA &strOut)
{
    return StreamToBase64<CStringA>(pStream, strOut);
}
HRESULT StreamToBase64(IStream* pStream, CStringW &strOut)
{
    return StreamToBase64<CStringW>(pStream, strOut);
}
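The reverse direction is not covered by the code above. As a hedged sketch (not using the Mfx helpers, which aren't shown), the Win32 Crypt API together with SHCreateMemStream can turn a Base64 string back into an IStream; the function name Base64ToStream is my own:
#include <atlstr.h>
#include <shlwapi.h>   // SHCreateMemStream
#include <wincrypt.h>  // CryptStringToBinaryA
#include <vector>
#pragma comment(lib, "Shlwapi.lib")
#pragma comment(lib, "Crypt32.lib")

HRESULT Base64ToStream(const CStringA& strIn, IStream** ppStream)
{
    if (!ppStream)
        return E_POINTER;
    *ppStream = nullptr;
    // First call: ask for the decoded size.
    DWORD cbBinary = 0;
    if (!CryptStringToBinaryA(strIn, strIn.GetLength(), CRYPT_STRING_BASE64,
                              nullptr, &cbBinary, nullptr, nullptr))
        return HRESULT_FROM_WIN32(GetLastError());
    // Second call: do the actual decoding.
    std::vector<BYTE> buffer(cbBinary);
    if (!CryptStringToBinaryA(strIn, strIn.GetLength(), CRYPT_STRING_BASE64,
                              buffer.data(), &cbBinary, nullptr, nullptr))
        return HRESULT_FROM_WIN32(GetLastError());
    // SHCreateMemStream copies the bytes into a new in-memory stream.
    IStream* pStream = SHCreateMemStream(buffer.data(), cbBinary);
    if (!pStream)
        return E_OUTOFMEMORY;
    *ppStream = pStream; // caller releases
    return S_OK;
}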

How to use IAudioClient3 (WASAPI) with Real-Time Work Queue API

I'm working on a lowest-possible-latency MIDI synthesizer. I'm aware of ASIO and other alternatives, but since they have apparently made significant improvements to the WASAPI stack (in shared mode, at least), I'm curious to try it out. I first wrote a simple event-driven version of the program, but as that's not the recommended way to do low-latency audio on Windows 10 (according to the docs), I'm trying to migrate to the Real-Time Work Queue API.
The documentation on Low Latency Audio states that it is recommended to use the Real-Time Work Queue API or MFCreateMFByteStreamOnStreamEx with WASAPI in order for the OS to manage work items in a way that will avoid interference from non-audio subsystems. This seems like a good idea, but the latter option seems to require some managed code (demonstrated in this WindowsAudioSession example), which I know nothing about and would preferably avoid (also the header Robytestream.h which has defs for the IRandomAccessStream isn't found on my system either).
The RTWQ example included in the docs is incomplete (doesn't compile as such), and I have made the necessary additions to make it compilable:
class my_rtqueue : IRtwqAsyncCallback {
public:
    IRtwqAsyncResult* pAsyncResult;
    RTWQWORKITEM_KEY workItemKey;
    DWORD WorkQueueId;
    STDMETHODIMP GetParameters(DWORD* pdwFlags, DWORD* pdwQueue)
    {
        HRESULT hr = S_OK;
        *pdwFlags = 0;
        *pdwQueue = WorkQueueId;
        return hr;
    }
    //-------------------------------------------------------
    STDMETHODIMP Invoke(IRtwqAsyncResult* pAsyncResult)
    {
        HRESULT hr = S_OK;
        IUnknown* pState = NULL;
        WCHAR className[20];
        DWORD bufferLength = 20;
        DWORD taskID = 0;
        LONG priority = 0;
        BYTE* pData;
        hr = render_info.renderclient->GetBuffer(render_info.buffer_framecount, &pData);
        ERROR_EXIT(hr);
        update_buffer((unsigned short*)pData, render_info.framesize_bytes / (2*sizeof(unsigned short))); // 2 channels, sizeof(unsigned short) == 2
        hr = render_info.renderclient->ReleaseBuffer(render_info.buffer_framecount, 0);
        ERROR_EXIT(hr);
        return S_OK;
    }
    STDMETHODIMP QueryInterface(const IID &riid, void **ppvObject) {
        return 0;
    }
    ULONG AddRef() {
        return 0;
    }
    ULONG Release() {
        return 0;
    }
    HRESULT queue(HANDLE event) {
        HRESULT hr;
        hr = RtwqPutWaitingWorkItem(event, 1, this->pAsyncResult, &this->workItemKey);
        return hr;
    }
    my_rtqueue() : workItemKey(0) {
        HRESULT hr = S_OK;
        IRtwqAsyncCallback* callback = NULL;
        DWORD taskId = 0;
        WorkQueueId = RTWQ_MULTITHREADED_WORKQUEUE;
        //WorkQueueId = RTWQ_STANDARD_WORKQUEUE;
        hr = RtwqLockSharedWorkQueue(L"Pro Audio", 0, &taskId, &WorkQueueId);
        ERROR_THROW(hr);
        hr = RtwqCreateAsyncResult(NULL, reinterpret_cast<IRtwqAsyncCallback*>(this), NULL, &pAsyncResult);
        ERROR_THROW(hr);
    }
    int stop() {
        HRESULT hr;
        if (pAsyncResult)
            pAsyncResult->Release();
        if (0xFFFFFFFF != this->WorkQueueId) {
            hr = RtwqUnlockWorkQueue(this->WorkQueueId);
            if (FAILED(hr)) {
                printf("Failed with RtwqUnlockWorkQueue 0x%x\n", hr);
                return 0;
            }
        }
        return 1;
    }
};
And so, the actual WASAPI code (HRESULT error checking is omitted for clarity):
void thread_main(LPVOID param) {
    HRESULT hr;
    REFERENCE_TIME hnsRequestedDuration = 0;
    IMMDeviceEnumerator* pEnumerator = NULL;
    IMMDevice* pDevice = NULL;
    IAudioClient3* pAudioClient = NULL;
    IAudioRenderClient* pRenderClient = NULL;
    WAVEFORMATEX* pwfx = NULL;
    HANDLE hEvent = NULL;
    HANDLE hTask = NULL;
    UINT32 bufferFrameCount;
    BYTE* pData;
    DWORD flags = 0;
    hr = RtwqStartup();
    // also, hr is checked for errors every step of the way
    hr = CoInitialize(NULL);
    hr = CoCreateInstance(
        CLSID_MMDeviceEnumerator, NULL,
        CLSCTX_ALL, IID_IMMDeviceEnumerator,
        (void**)&pEnumerator);
    hr = pEnumerator->GetDefaultAudioEndpoint(
        eRender, eConsole, &pDevice);
    hr = pDevice->Activate(
        IID_IAudioClient, CLSCTX_ALL,
        NULL, (void**)&pAudioClient);
    WAVEFORMATEX wave_format = {};
    wave_format.wFormatTag = WAVE_FORMAT_PCM;
    wave_format.nChannels = 2;
    wave_format.nSamplesPerSec = 48000;
    wave_format.nAvgBytesPerSec = 48000 * 2 * 16 / 8;
    wave_format.nBlockAlign = 2 * 16 / 8;
    wave_format.wBitsPerSample = 16;
    UINT32 DP, FP, MINP, MAXP;
    hr = pAudioClient->GetSharedModeEnginePeriod(&wave_format, &DP, &FP, &MINP, &MAXP);
    printf("DefaultPeriod: %u, Fundamental period: %u, min_period: %u, max_period: %u\n", DP, FP, MINP, MAXP);
    hr = pAudioClient->InitializeSharedAudioStream(AUDCLNT_STREAMFLAGS_EVENTCALLBACK, MINP, &wave_format, 0);
    my_rtqueue* workqueue = NULL;
    try {
        workqueue = new my_rtqueue();
    }
    catch (...) {
        hr = E_ABORT;
        ERROR_EXIT(hr);
    }
    hr = pAudioClient->GetBufferSize(&bufferFrameCount);
    PWAVEFORMATEX wf = &wave_format;
    UINT32 current_period;
    pAudioClient->GetCurrentSharedModeEnginePeriod(&wf, &current_period);
    INT32 FrameSize_bytes = bufferFrameCount * wave_format.nChannels * wave_format.wBitsPerSample / 8;
    printf("bufferFrameCount: %u, FrameSize_bytes: %d, current_period: %u\n", bufferFrameCount, FrameSize_bytes, current_period);
    hr = pAudioClient->GetService(
        IID_IAudioRenderClient,
        (void**)&pRenderClient);
    render_info.framesize_bytes = FrameSize_bytes;
    render_info.buffer_framecount = bufferFrameCount;
    render_info.renderclient = pRenderClient;
    hEvent = CreateEvent(nullptr, false, false, nullptr);
    if (hEvent == NULL) { ERROR_EXIT(0); } // CreateEvent returns NULL (not INVALID_HANDLE_VALUE) on failure
    hr = pAudioClient->SetEventHandle(hEvent);
    const size_t num_samples = FrameSize_bytes / sizeof(unsigned short);
    DWORD taskIndex = 0;
    hTask = AvSetMmThreadCharacteristics(TEXT("Pro Audio"), &taskIndex);
    if (hTask == NULL) {
        hr = E_FAIL;
    }
    hr = pAudioClient->Start(); // Start playing.
    running = 1;
    while (running) {
        workqueue->queue(hEvent);
    }
    workqueue->stop();
    hr = RtwqShutdown();
    delete workqueue;
    running = 0;
    return;
}
This seems to kind of work (i.e. audio is being output), but on every other invocation of my_rtqueue::Invoke(), IAudioRenderClient::GetBuffer() returns an HRESULT of 0x88890006 (AUDCLNT_E_BUFFER_TOO_LARGE), and the actual audio output is certainly not what I intend it to be.
What issues are there with my code? Is this the right way to use RTWQ with WASAPI?
Turns out there were a number of issues with my code, none of which really had anything to do with Rtwq. The biggest issue was me assuming that the shared mode audio stream was using 16-bit integer samples, when in reality my audio was set up for the 32-bit float format (WAVE_FORMAT_IEEE_FLOAT). The currently active shared mode format, period etc. should be fetched like this:
WAVEFORMATEX *wavefmt = NULL;
UINT32 current_period = 0;
hr = pAudioClient->GetCurrentSharedModeEnginePeriod((WAVEFORMATEX**)&wavefmt, &current_period);
wavefmt now contains the output format info of the current shared mode. If the wFormatTag field is equal to WAVE_FORMAT_EXTENSIBLE, one needs to cast WAVEFORMATEX to WAVEFORMATEXTENSIBLE to see what the actual format is. After this, one needs to fetch the minimum period supported by the current setup, like so:
UINT32 DP, FP, MINP, MAXP;
hr = pAudioClient->GetSharedModeEnginePeriod(wavefmt, &DP, &FP, &MINP, &MAXP);
and then initialize the audio stream with the new InitializeSharedAudioStream function:
hr = pAudioClient->InitializeSharedAudioStream(AUDCLNT_STREAMFLAGS_EVENTCALLBACK, MINP, wavefmt, NULL);
... get the buffer's actual size:
hr = pAudioClient->GetBufferSize(&render_info.buffer_framecount);
and use GetCurrentPadding in the Get/ReleaseBuffer logic:
UINT32 pad = 0;
hr = render_info.audioclient->GetCurrentPadding(&pad);
int actual_size = (render_info.buffer_framecount - pad);
hr = render_info.renderclient->GetBuffer(actual_size, &pData);
if (SUCCEEDED(hr)) {
    update_buffer((float*)pData, actual_size);
    hr = render_info.renderclient->ReleaseBuffer(actual_size, 0);
    ERROR_EXIT(hr);
}
The documentation for IAudioClient::Initialize states the following about shared mode streams (I assume it also applies to the new IAudioClient3):
Each time the thread awakens, it should call IAudioClient::GetCurrentPadding to determine how much data to write to a rendering buffer or read from a capture buffer. In contrast to the two buffers that the Initialize method allocates for an exclusive-mode stream that uses event-driven buffering, a shared-mode stream requires a single buffer.
Using GetCurrentPadding solves the problem with AUDCLNT_E_BUFFER_TOO_LARGE, and feeding the buffer with 32-bit float samples instead of 16-bit integers makes the output sound fine on my system (although the effect was quite funky!).
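For reference, the WAVEFORMATEXTENSIBLE check mentioned above could look roughly like this (a sketch; IsFloatFormat is my own helper name, and KSDATAFORMAT_SUBTYPE_IEEE_FLOAT comes from ksmedia.h):
#include <mmreg.h>
#include <ks.h>
#include <ksmedia.h> // depending on the project, <initguid.h> may need to be included first

bool IsFloatFormat(const WAVEFORMATEX* wavefmt)
{
    if (wavefmt->wFormatTag == WAVE_FORMAT_IEEE_FLOAT)
        return true;
    if (wavefmt->wFormatTag == WAVE_FORMAT_EXTENSIBLE)
    {
        const WAVEFORMATEXTENSIBLE* wfex = reinterpret_cast<const WAVEFORMATEXTENSIBLE*>(wavefmt);
        return IsEqualGUID(wfex->SubFormat, KSDATAFORMAT_SUBTYPE_IEEE_FLOAT) != FALSE;
    }
    return false;
}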
If someone comes up with better/more correct ways to use the Rtwq API, I would love to hear them.

Serialize: CArchive a CImage

I am currently trying to figure out how to properly store a CImage file within a CArchive (JPEG). My current approach to this is the following (pseudo) code:
BOOL CPicture::Serialize(CArchive &ar)
{
    IStream *pStream = NULL;
    HRESULT hr;
    CImage *img = GetImage();
    if (ar.IsLoading())
    {
        HGLOBAL hMem = GlobalAlloc(GMEM_FIXED, 54262);
        hr = CreateStreamOnHGlobal(hMem, FALSE, &pStream);
        if(SUCCEEDED(hr))
        {
            ar.Read(pStream, 54262);
            img->Load(pStream);
            pStream->Release();
            GlobalUnlock(hMem);
            GlobalFree(hMem);
        }
    }
    else
    {
        hr = CreateStreamOnHGlobal(0, TRUE, &pStream);
        if (SUCCEEDED(hr))
        {
            hr = img->Save(pStream, Gdiplus::ImageFormatJPEG);
            if (SUCCEEDED(hr))
                ar.Write(pStream, 54262);
        }
    }
    ...
I am just now getting back into C++ and have only done a little with it in the past. Any help would be very much appreciated.
Thank you very much in advance.
I'm not an expert on IStream, but I think you may not be using it correctly. The following code seems to work. It currently archives in PNG format, but that can easily be changed by swapping out Gdiplus::ImageFormatPNG.
It owes a lot to the tutorial "Embracing IStream as just a stream of bytes" by S. Arman on CodeProject.
void ImageArchive(CImage *pImage, CArchive &ar)
{
    HRESULT hr;
    if (ar.IsStoring())
    {
        // create a stream
        IStream *pStream = SHCreateMemStream(NULL, 0);
        ASSERT(pStream != NULL);
        if (pStream == NULL)
            return;
        // write the image to a stream rather than file (the stream in this case is just a chunk of memory automatically allocated by the stream itself)
        pImage->Save(pStream, Gdiplus::ImageFormatPNG); // Note: IStream will automatically grow as necessary.
        // find the size of memory written (i.e. the image file size)
        STATSTG statsg;
        hr = pStream->Stat(&statsg, STATFLAG_NONAME);
        ASSERT(hr == S_OK);
        ASSERT(statsg.cbSize.QuadPart < ULONG_MAX);
        ULONG nImgBytes = ULONG(statsg.cbSize.QuadPart); // any image that can be displayed had better not have more than MAX_ULONG bytes
        // go to the start of the stream
        LARGE_INTEGER seekPos;
        seekPos.QuadPart = 0;
        hr = pStream->Seek(seekPos, STREAM_SEEK_SET, NULL);
        ASSERT(hr == S_OK);
        // get data in stream into a standard byte array
        char *bytes = new char[nImgBytes];
        ULONG nRead;
        hr = pStream->Read(bytes, nImgBytes, &nRead); // read the data from the stream into normal memory. nRead should be equal to statsg.cbSize.QuadPart.
        ASSERT(hr == S_OK);
        ASSERT(nImgBytes == nRead);
        // write data to archive and finish
        ar << nRead; // need to save the amount of memory needed for the file, since we will need to read this amount later
        ar.Write(bytes, nRead); // write the data to the archive file from the stream memory
        pStream->Release();
        delete[] bytes;
    }
    else
    {
        // get the data from the archive
        ULONG nBytes;
        ar >> nBytes;
        char *bytes = new char[nBytes]; // ordinary memory to hold data from archive file
        UINT nBytesRead = ar.Read(bytes, nBytes); // read the data from the archive file
        ASSERT(nBytesRead == UINT(nBytes));
        // make the stream
        IStream *pStream = SHCreateMemStream(NULL, 0);
        ASSERT(pStream != NULL);
        if (pStream == NULL)
            return;
        // put the archive data into the stream
        ULONG nBytesWritten;
        pStream->Write(bytes, nBytes, &nBytesWritten);
        ASSERT(nBytes == nBytesWritten);
        if (nBytes != nBytesWritten)
            return;
        // go to the start of the stream
        LARGE_INTEGER seekPos;
        seekPos.QuadPart = 0;
        hr = pStream->Seek(seekPos, STREAM_SEEK_SET, NULL);
        ASSERT(hr == S_OK);
        // load the stream into CImage and finish
        pImage->Load(pStream); // pass the archive data to CImage
        pStream->Release();
        delete[] bytes;
    }
}
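To tie this back to the original Serialize override, a hedged usage sketch (assuming GetImage() returns the CImage* as in the question; ImageArchive returns void, so no error is propagated) could simply forward to it:
BOOL CPicture::Serialize(CArchive &ar)
{
    ImageArchive(GetImage(), ar); // handles both the storing and loading branches internally
    return TRUE;
}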

How to get width and height of directshow webcam video stream

I found a bit of code that gets me access to the raw pixel data from my webcam. However, I need to know the image width, height, pixel format and, preferably, the data stride (pitch, memory padding, or whatever you want to call it), in case it's ever something other than width * bytes per pixel.
#include <windows.h>
#include <dshow.h>
#pragma comment(lib,"Strmiids.lib")
#define DsHook(a,b,c) if (!c##_) { INT_PTR* p=b+*(INT_PTR**)a; VirtualProtect(&c##_,4,PAGE_EXECUTE_READWRITE,&no);\
*(INT_PTR*)&c##_=*p; VirtualProtect(p, 4,PAGE_EXECUTE_READWRITE,&no); *p=(INT_PTR)c; }
// Here you get image video data in buf / len. Process it before calling Receive_ because the renderer deallocates it.
HRESULT ( __stdcall * Receive_ ) ( void* inst, IMediaSample *smp ) ;
HRESULT __stdcall Receive ( void* inst, IMediaSample *smp ) {
BYTE* buf; smp->GetPointer(&buf); DWORD len = smp->GetActualDataLength();
//AM_MEDIA_TYPE* info;
//smp->GetMediaType(&info);
HRESULT ret = Receive_ ( inst, smp );
return ret;
}
int WINAPI WinMain(HINSTANCE inst,HINSTANCE prev,LPSTR cmd,int show){
HRESULT hr = CoInitialize(0); MSG msg={0}; DWORD no;
IGraphBuilder* graph= 0; hr = CoCreateInstance( CLSID_FilterGraph, 0, CLSCTX_INPROC,IID_IGraphBuilder, (void **)&graph );
IMediaControl* ctrl = 0; hr = graph->QueryInterface( IID_IMediaControl, (void **)&ctrl );
ICreateDevEnum* devs = 0; hr = CoCreateInstance (CLSID_SystemDeviceEnum, 0, CLSCTX_INPROC, IID_ICreateDevEnum, (void **) &devs);
IEnumMoniker* cams = 0; hr = devs?devs->CreateClassEnumerator (CLSID_VideoInputDeviceCategory, &cams, 0):0;
IMoniker* mon = 0; hr = cams->Next (1,&mon,0); // get first found capture device (webcam?)
IBaseFilter* cam = 0; hr = mon->BindToObject(0,0,IID_IBaseFilter, (void**)&cam);
hr = graph->AddFilter(cam, L"Capture Source"); // add web cam to graph as source
IEnumPins* pins = 0; hr = cam?cam->EnumPins(&pins):0; // we need output pin to autogenerate rest of the graph
IPin* pin = 0; hr = pins?pins->Next(1,&pin, 0):0; // via graph->Render
hr = graph->Render(pin); // graph builder now builds whole filter chain including MJPG decompression on some webcams
IEnumFilters* fil = 0; hr = graph->EnumFilters(&fil); // from all newly added filters
IBaseFilter* rnd = 0; hr = fil->Next(1,&rnd,0); // we find last one (renderer)
hr = rnd->EnumPins(&pins); // because data we are intersted in are pumped to renderers input pin
hr = pins->Next(1,&pin, 0); // via Receive member of IMemInputPin interface
IMemInputPin* mem = 0; hr = pin->QueryInterface(IID_IMemInputPin,(void**)&mem);
DsHook(mem,6,Receive); // so we redirect it to our own proc to grab image data
hr = ctrl->Run();
while ( GetMessage( &msg, 0, 0, 0 ) ) {
TranslateMessage( &msg );
DispatchMessage( &msg );
}
};
Bonus points if you can tell me how to get this thing not to render a window but still give me access to the image data.
That's really ugly. Please don't do that. Insert a pass-through filter like the sample grabber instead (as I replied to your other post on the same topic). Connecting the sample grabber to the null renderer gets you the bits in a clean, safe way without rendering the image.
To get the stride, you need to get the media type, either through ISampleGrabber or IPin::ConnectionMediaType. The format block will be either a VIDEOINFOHEADER or a VIDEOINFOHEADER2 (check the format GUID). The bitmapinfo header biWidth and biHeight defines the bitmap dimensions (and hence stride). If the RECT is not empty, then that defines the relevant image area within the bitmap.
I'm going to have to wash my hands now after touching this post.
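To illustrate the ConnectionMediaType route suggested above, a rough sketch (reusing the renderer input pin pin from the question's code; error handling and the VIDEOINFOHEADER2 case left out):
AM_MEDIA_TYPE mt = {};
if (SUCCEEDED(pin->ConnectionMediaType(&mt)))
{
    if (mt.formattype == FORMAT_VideoInfo)
    {
        const VIDEOINFOHEADER* vih = reinterpret_cast<VIDEOINFOHEADER*>(mt.pbFormat);
        LONG width  = vih->bmiHeader.biWidth;   // bitmap width in pixels
        LONG height = vih->bmiHeader.biHeight;  // positive height = bottom-up DIB
        // stride in bytes for an uncompressed format (rows are DWORD-aligned):
        LONG strideBytes = ((width * vih->bmiHeader.biBitCount + 31) / 32) * 4;
        // ... use width / height / strideBytes ...
    }
    if (mt.pbFormat) CoTaskMemFree(mt.pbFormat); // same cleanup FreeMediaType() would do
    if (mt.pUnk) mt.pUnk->Release();
}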
I am sorry for you. Whoever created this interface was probably not at their best.
// Here you get image video data in buf / len. Process it before calling Receive_ because the renderer deallocates it.
BITMAPINFOHEADER bmpInfo; // current bitmap header info
int stride;
HRESULT ( __stdcall * Receive_ ) ( void* inst, IMediaSample *smp );
HRESULT __stdcall Receive ( void* inst, IMediaSample *smp )
{
    BYTE* buf; smp->GetPointer(&buf); DWORD len = smp->GetActualDataLength();
    HRESULT ret = Receive_ ( inst, smp );
    AM_MEDIA_TYPE* info = NULL;
    HRESULT hr = smp->GetMediaType(&info); // S_FALSE means the format has not changed since the last sample
    if ( hr != S_OK || info == NULL )
    {
        //TODO: error handling / format unchanged
    }
    else
    {
        if ( info->formattype == FORMAT_VideoInfo )
        {
            const VIDEOINFOHEADER * vi = reinterpret_cast<VIDEOINFOHEADER*>( info->pbFormat );
            const BITMAPINFOHEADER & bmiHeader = vi->bmiHeader;
            //! now bmiHeader.biWidth contains the data stride
            stride = bmiHeader.biWidth;
            bmpInfo = bmiHeader;
            int width = ( vi->rcTarget.right - vi->rcTarget.left );
            //! replace the data stride by the actual width
            if ( width != 0 )
                bmpInfo.biWidth = width;
        }
        else
        {
            // unsupported format
        }
        DeleteMediaType( info ); // DeleteMediaType is a DirectShow base-classes helper
    }
    return ret;
}
Here's how to add the Null Renderer that suppresses the rendering window. Add directly after creating the IGraphBuilder*
//create null renderer and add null renderer to graph
IBaseFilter *m_pNULLRenderer; hr = CoCreateInstance(CLSID_NullRenderer, NULL, CLSCTX_INPROC_SERVER, IID_IBaseFilter, (void **)&m_pNULLRenderer);
hr = graph->AddFilter(m_pNULLRenderer, L"Null Renderer");
That dshook hack is the only elegant directshow code of which I am aware.
In my experience, the DirectShow API is a convoluted nightmare, requiring hundreds of lines of code to do even the simplest operation and forcing you to adopt a whole programming paradigm just to access your web camera. So if this code does the job for you, as it did for me, use it and enjoy fewer lines of code to maintain.