Create IMFByteStream from byte array - c++

I am trying to adapt a method that originally took a URL from Microsoft's MediaFoundation audio playback sample to instead take a source from a const char* array. Problem is, CreateObjectFromByteStream requires an IMFByteStream, not a const char*. How can I get what I need?
// Create a media source from a byte stream
HRESULT CreateMediaSource(const byte *data, IMFMediaSource **ppSource)
{
    MF_OBJECT_TYPE ObjectType = MF_OBJECT_INVALID;

    IMFSourceResolver* pSourceResolver = NULL;
    IUnknown* pSource = NULL;

    // Create the source resolver.
    HRESULT hr = MFCreateSourceResolver(&pSourceResolver);
    if (FAILED(hr))
    {
        goto done;
    }

    // Use the source resolver to create the media source.
    // Note: For simplicity this sample uses the synchronous method to create
    // the media source. However, creating a media source can take a noticeable
    // amount of time, especially for a network source. For a more responsive
    // UI, use the asynchronous BeginCreateObjectFromURL method.
    hr = pSourceResolver->CreateObjectFromByteStream(
        data,                       // <-- this is the problem: an IMFByteStream is required here
        NULL,                       // URL of the source.
        MF_RESOLUTION_MEDIASOURCE,  // Create a source object.
        NULL,                       // Optional property store.
        &ObjectType,                // Receives the created object type.
        &pSource                    // Receives a pointer to the media source.
        );
    if (FAILED(hr))
    {
        goto done;
    }

    // Get the IMFMediaSource interface from the media source.
    hr = pSource->QueryInterface(IID_PPV_ARGS(ppSource));

done:
    SafeRelease(&pSourceResolver);
    SafeRelease(&pSource);
    return hr;
}

I found the easiest way to do this was to just create a temp file and write *data there. It's an ugly hack, but it worked well enough for me, and if needed it can easily be replaced by a custom in-memory IMFByteStream implementation.
So the code would be something like this:
BYTE data[] = { 'a', 'b', 'c', 'd', 'e', 'f' };

HRESULT hr = S_OK;
hr = MFStartup(MF_VERSION);

// Create a temp-file-backed byte stream.
IMFByteStream *stream = NULL;
hr = MFCreateTempFile(
    MF_ACCESSMODE_READWRITE,
    MF_OPENMODE_DELETE_IF_EXIST,
    MF_FILEFLAGS_NONE,
    &stream
    );

// Write the data, then rewind so the resolver reads from the start.
ULONG wroteBytes = 0;
hr = stream->Write(data, sizeof(data), &wroteBytes);
hr = stream->SetCurrentPosition(0);
// Make sure that wroteBytes is equal to the data length.
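From there the stream can go straight into the source resolver. A minimal sketch, assuming MFStartup succeeded and reusing the SafeRelease helper from the question; the L"dummy.mp3" URL is only a hypothetical extension hint, since there is no real URL to sniff:

// Sketch: resolve a media source from the temp-file-backed byte stream.
HRESULT CreateMediaSourceFromStream(IMFByteStream *stream, IMFMediaSource **ppSource)
{
    IMFSourceResolver *pResolver = NULL;
    IUnknown *pUnk = NULL;
    MF_OBJECT_TYPE objectType = MF_OBJECT_INVALID;

    HRESULT hr = MFCreateSourceResolver(&pResolver);
    if (SUCCEEDED(hr))
    {
        // A fake URL gives the resolver a file-extension hint; the extra
        // flag tells it the content need not match that extension.
        hr = pResolver->CreateObjectFromByteStream(
            stream, L"dummy.mp3",
            MF_RESOLUTION_MEDIASOURCE |
            MF_RESOLUTION_CONTENT_DOES_NOT_HAVE_TO_MATCH_EXTENSION_OR_MIME_TYPE,
            NULL, &objectType, &pUnk);
    }
    if (SUCCEEDED(hr))
    {
        hr = pUnk->QueryInterface(IID_PPV_ARGS(ppSource));
    }
    SafeRelease(&pUnk);
    SafeRelease(&pResolver);
    return hr;
}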

You can use MFCreateMFByteStreamOnStream() to create an IMFByteStream from an IStream, and you can create an IStream from a byte array using SHCreateMemStream(). The documentation at the time of writing is at https://learn.microsoft.com/en-us/windows/win32/api/mfidl/nf-mfidl-mfcreatemfbytestreamonstream and https://learn.microsoft.com/en-us/windows/win32/api/shlwapi/nf-shlwapi-shcreatememstream
Here's a quick example:
// Generate a byte array
int sample_size = 0x100;
BYTE* sample_bytes = (BYTE*)malloc(sample_size);

// Create the IStream from the byte array. SHCreateMemStream copies the
// data, so the original buffer can be freed afterwards.
IStream* pstm = SHCreateMemStream(sample_bytes, sample_size);

// Create the IMFByteStream from the IStream
IMFByteStream* pibs = NULL;
MFCreateMFByteStreamOnStream(pstm, &pibs);

// Clean up time
if (pibs)
{
    pibs->Close();
    pibs->Release();
}
if (pstm)
    pstm->Release();
if (sample_bytes)
    free(sample_bytes);
Having an IStream interface but not a byte array interface seems to be a frequent occurrence in the Microsoft API. Thankfully creating an IStream is easy.

Related

WASAPI captured packets do not align

I'm trying to visualize a soundwave captured by WASAPI loopback but find that the packets I record do not form a smooth wave when put together.
My understanding of how the WASAPI capture client works is that when I call pCaptureClient->GetBuffer(&pData, &numFramesAvailable, &flags, NULL, NULL) the buffer pData is filled from the front with numFramesAvailable datapoints. Each datapoint is a float and they alternate by channel. Thus to get all available datapoints I should cast pData to a float pointer, and take the first channels * numFramesAvailable values. Once I release the buffer and call GetBuffer again it provides the next packet. I would assume that these packets would follow on from each other but it doesn't seem to be the case.
My guess is that either I'm making an incorrect assumption about the format of the audio data in pData, or the capture client is missing or overlapping frames, but I have no idea how to check either of these.
To make the code below as brief as possible I've removed things like error status checking and cleanup.
Initialization of capture client:
const CLSID CLSID_MMDeviceEnumerator = __uuidof(MMDeviceEnumerator);
const IID IID_IMMDeviceEnumerator = __uuidof(IMMDeviceEnumerator);
const IID IID_IAudioClient = __uuidof(IAudioClient);
const IID IID_IAudioCaptureClient = __uuidof(IAudioCaptureClient);

IMMDeviceEnumerator *pDeviceEnumerator = NULL;
IMMDevice *pDeviceEndpoint = NULL;
IAudioClient *pAudioClient = NULL;
IAudioCaptureClient *pCaptureClient = NULL;
int channels;

// Initialize audio device endpoint
CoInitialize(nullptr);
CoCreateInstance(CLSID_MMDeviceEnumerator, NULL, CLSCTX_ALL, IID_IMMDeviceEnumerator, (void**)&pDeviceEnumerator);
pDeviceEnumerator->GetDefaultAudioEndpoint(eRender, eConsole, &pDeviceEndpoint);

// Init audio client
WAVEFORMATEX *pwfx = NULL;
REFERENCE_TIME hnsRequestedDuration = 10000000;
REFERENCE_TIME hnsActualDuration;
pDeviceEndpoint->Activate(IID_IAudioClient, CLSCTX_ALL, NULL, (void**)&pAudioClient);
pAudioClient->GetMixFormat(&pwfx);
pAudioClient->Initialize(AUDCLNT_SHAREMODE_SHARED, AUDCLNT_STREAMFLAGS_LOOPBACK, hnsRequestedDuration, 0, pwfx, NULL);
channels = pwfx->nChannels;
pAudioClient->GetService(IID_IAudioCaptureClient, (void**)&pCaptureClient);
pAudioClient->Start(); // Start recording.
Capture of packets (note that std::mutex packet_buffer_mutex and vector<vector<float>> packet_buffer are already defined and used by another thread to safely display the data):
UINT32 packetLength = 0;
BYTE *pData = NULL;
UINT32 numFramesAvailable;
DWORD flags;
int max_packets = 8;

std::unique_lock<std::mutex> write_guard(packet_buffer_mutex, std::defer_lock);

while (true) {
    pCaptureClient->GetNextPacketSize(&packetLength);
    while (packetLength != 0)
    {
        // Get the available data in the shared buffer.
        pData = NULL;
        pCaptureClient->GetBuffer(&pData, &numFramesAvailable, &flags, NULL, NULL);
        if (flags & AUDCLNT_BUFFERFLAGS_SILENT)
        {
            pData = NULL; // Tell CopyData to write silence.
        }

        write_guard.lock();
        if (packet_buffer.size() == max_packets) {
            packet_buffer.pop_back();
        }
        if (pData) {
            float *pfData = (float*)pData;
            packet_buffer.emplace(packet_buffer.begin(), pfData, pfData + channels * numFramesAvailable);
        } else {
            packet_buffer.emplace(packet_buffer.begin());
        }
        write_guard.unlock();

        pCaptureClient->ReleaseBuffer(numFramesAvailable);
        pCaptureClient->GetNextPacketSize(&packetLength);
    }
    std::this_thread::sleep_for(std::chrono::milliseconds(10));
}
I store the packets in a vector<vector<float>> (where each vector<float> is a packet), removing the last one and inserting the newest at the start so I can iterate over them in order.
Below is the result of a captured sinewave, plotting alternating values so it only represents a single channel. It is clear where the packets are being stitched together.
Something is playing a sine wave to Windows; you're recording the sine wave back in the audio loopback; and the sine wave you're getting back isn't really a sine wave.
You're almost certainly running into glitches. The most likely causes of glitching are:
Whatever is playing the sine wave to Windows isn't getting data to Windows in time, so the buffer is running dry.
Whatever is reading the loopback data out of Windows isn't reading the data in time, so the buffer is filling up.
Something is going wrong in between playing the sine wave to Windows and reading it back.
It is possible that more than one of these is happening.
The IAudioCaptureClient::GetBuffer call will tell you if you read the data too late. In particular it will set *pdwFlags so that the AUDCLNT_BUFFERFLAGS_DATA_DISCONTINUITY bit is set.
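A cheap way to confirm this is to watch that flag as the packets come in; a sketch (glitchCount is just an illustrative counter, not from the question):

// Sketch: count engine-reported discontinuities while capturing.
hr = pCaptureClient->GetBuffer(&pData, &numFramesAvailable, &flags, NULL, NULL);
if (flags & AUDCLNT_BUFFERFLAGS_DATA_DISCONTINUITY)
{
    // Data was lost between this packet and the previous one, so the
    // captured waveform cannot be stitched smoothly across this boundary.
    ++glitchCount;
}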
Looking at your code, I see you're doing the following things between the GetBuffer and the ReleaseBuffer:
Waiting on a lock
Sometimes doing something called "pop_back"
Doing something called "emplace"
I quote from the above-linked documentation:
Clients should avoid excessive delays between the GetBuffer call that acquires a packet and the ReleaseBuffer call that releases the packet. The implementation of the audio engine assumes that the GetBuffer call and the corresponding ReleaseBuffer call occur within the same buffer-processing period. Clients that delay releasing a packet for more than one period risk losing sample data.
In particular you should NEVER DO ANY OF THE FOLLOWING between GetBuffer and ReleaseBuffer because eventually they will cause a glitch:
Wait on a lock
Wait on any other operation
Read from or write to a file
Allocate memory
Instead, pre-allocate a bunch of memory before calling IAudioClient::Start. As each buffer arrives, write to this memory. On the side, have a regularly scheduled work item that takes written memory and writes it to disk or whatever you're doing with it.
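For instance, a minimal sketch of that pattern (the RingBuffer type and its design are illustrative, not from the question):

// Sketch: pre-allocated ring buffer written to between GetBuffer/ReleaseBuffer.
// No locks, no allocation, no I/O on the capture thread.
#include <atomic>
#include <vector>

struct RingBuffer {
    std::vector<float> data;          // allocated once, before IAudioClient::Start
    std::atomic<size_t> writePos{0};  // total samples written so far

    explicit RingBuffer(size_t capacity) : data(capacity) {}

    // Called on the capture thread: copies only, never blocks.
    void Write(const float *src, size_t count) {
        size_t pos = writePos.load(std::memory_order_relaxed);
        for (size_t i = 0; i < count; ++i)
            data[(pos + i) % data.size()] = src[i];
        writePos.store(pos + count, std::memory_order_release);
    }
};

The capture thread then only copies into memory that already exists; the display thread reads behind writePos on its own schedule. A production version would also track a read cursor and detect overruns.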

Media Foundation: Getting a MediaSink from a SinkWriter

I'm trying to add an MP4 file sink to a Topology. When my MediaSource is already MP4, I use MFCreateMPEG4MediaSink and MF_MPEG4SINK_SPSPPS_PASSTHROUGH. When my MediaSource isn't MP4 (so raw YUV from a webcam), I want to use MFCreateSinkWriterFromURL so that I don't have to figure out MP4 headers and other complex stuff.
According to the MSDN Docs I should be able to use GetServiceForStream to get at the MediaSink, since the input type is different from the output type. However, it always returns MF_E_UNSUPPORTED_SERVICE.
How can I get the underlying MediaSink out of a MediaSinkWriter?
Alternatively, how can I easily create a MP4 media sink for an arbitrary topology?
HRESULT CreateVideoFileSink(
    IMFStreamDescriptor *pSourceSD, // Pointer to the stream descriptor.
    LPCWSTR pFilename,              // Name of file to save to.
    IMFStreamSink **ppStream)       // Receives a pointer to the stream sink.
{
    HRESULT hr = S_OK;
    CComPtr<IMFAttributes> pAttr;
    CComPtr<IMFMediaTypeHandler> pHandler;
    CComPtr<IMFMediaType> pType;
    CComPtr<IMFMediaSink> pSink;
    CComPtr<IMFStreamSink> pStream;
    CComPtr<IMFSinkWriter> pSinkWriter;
    CComPtr<IMFByteStream> pByteStream;

    *ppStream = nullptr;

    // Get the media type handler for the stream.
    IFR(pSourceSD->GetMediaTypeHandler(&pHandler));

    // Get the major media type.
    GUID guidMajorType;
    IFR(pHandler->GetMajorType(&guidMajorType));

    IFR(MFCreateAttributes(&pAttr, 1));
    IFR(pAttr->SetUINT32(MF_READWRITE_ENABLE_HARDWARE_TRANSFORMS, TRUE));

    // Create an output file
    if (MFMediaType_Video == guidMajorType)
    {
        GUID guidSubType;
        IFR(pHandler->GetCurrentMediaType(&pType));
        IFR(pType->GetGUID(MF_MT_SUBTYPE, &guidSubType));
        if (MFVideoFormat_H264 == guidSubType)
        {
            // ... use MFCreateMPEG4MediaSink
        }
        else
        {
            IFR(MFCreateSinkWriterFromURL(pFilename, nullptr, pAttr, &pSinkWriter));
            DWORD streamIdx;
            IFR(pSinkWriter->AddStream(pType, &streamIdx));
            IFR(pSinkWriter->GetServiceForStream(MF_SINK_WRITER_MEDIASINK, GUID_NULL, IID_PPV_ARGS(&pSink)));
            IFR(pSink->GetStreamSinkByIndex(streamIdx, &pStream));
        }
    }
    else
    {
        // Don't use this stream
        IFR(E_FAIL);
    }

    // Return IMFStreamSink pointer to caller.
    *ppStream = pStream.Detach();
    return S_OK;
}
Figured it out right after writing the question - of course. The SinkWriter doesn't have a MediaSink until you call BeginWriting.
IFR(MFCreateSinkWriterFromURL(pFilename, nullptr, pAttr, &pSinkWriter));
DWORD streamIdx;
IFR(pSinkWriter->AddStream(pType, &streamIdx));
IFR(pSinkWriter->BeginWriting()); // <<----
IFR(pSinkWriter->GetServiceForStream(MF_SINK_WRITER_MEDIASINK, GUID_NULL, IID_PPV_ARGS(&pSink)));
IFR(pSink->GetStreamSinkByIndex(streamIdx, &pStream));
(Make sure you don't let the SinkWriter get Released while you're using the StreamSink)
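One way to enforce that is to keep both pointers together so their lifetimes match; a minimal sketch (the struct name is illustrative):

// Sketch: the SinkWriter owns the underlying media sink, so keep it alive
// for as long as the stream sink is in use (e.g. in a topology).
struct FileSinkStream
{
    CComPtr<IMFSinkWriter> writer; // keeps the underlying media sink alive
    CComPtr<IMFStreamSink> stream; // handed to the topology output node
};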

Media Foundation Audio/Video capturing to MPEG4FileSink produces incorrect duration

I am working on a media streaming application using the Media Foundation framework. I've used some samples from the internet and from Anton Polinger's book. Unfortunately, after saving streams into an mp4 file the metadata of the file is corrupted: it has an incorrect duration (matching how long my PC has been running, 30 hours for instance) and a wrong bitrate. After a long struggle I fixed it for a single stream (video or audio), but when I try to record both audio and video this problem returns. Something is wrong with my topology, but I can't understand what; perhaps some experts here can.
I get the audio and video sources, wrap them into an IMFCollection, and create an aggregate source with MFCreateAggregateSource.
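For reference, that wrapping step looks roughly like this (a sketch; pVideoSource, pAudioSource, and the Com::IMFCollectionPtr/Com::IMFMediaSourcePtr typedefs are assumptions in the style of the code below):

// Sketch: wrap two live sources into one aggregate media source.
Com::IMFCollectionPtr pCollection;
Com::IMFMediaSourcePtr pAggregateSource;
HRESULT hr = MFCreateCollection(&pCollection);
THROW_ON_FAIL(hr, L"Unable to create collection");
hr = pCollection->AddElement(pVideoSource);
THROW_ON_FAIL(hr, L"Unable to add video source to collection");
hr = pCollection->AddElement(pAudioSource);
THROW_ON_FAIL(hr, L"Unable to add audio source to collection");
hr = MFCreateAggregateSource(pCollection, &pAggregateSource);
THROW_ON_FAIL(hr, L"Unable to create aggregate source");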
I create source nodes for each stream in the aggregate source:
Com::IMFTopologyNodePtr TopologyBuilder::CreateSourceNode(Com::IMFStreamDescriptorPtr streamDescriptor)
{
    HRESULT hr = S_OK;
    Com::IMFTopologyNodePtr pNode;

    // Create the topology node, indicating that it must be a source node.
    hr = MFCreateTopologyNode(MF_TOPOLOGY_SOURCESTREAM_NODE, &pNode);
    THROW_ON_FAIL(hr, "Unable to create topology node for source");

    // Associate the node with the source by passing in a pointer to the media source,
    // and indicating that it is the source
    hr = pNode->SetUnknown(MF_TOPONODE_SOURCE, _sourceDefinition->GetMediaSource());
    THROW_ON_FAIL(hr, "Unable to set source as object for topology node");

    // Set the node presentation descriptor attribute of the node by passing
    // in a pointer to the presentation descriptor
    hr = pNode->SetUnknown(MF_TOPONODE_PRESENTATION_DESCRIPTOR, _sourceDefinition->GetPresentationDescriptor());
    THROW_ON_FAIL(hr, "Unable to set MF_TOPONODE_PRESENTATION_DESCRIPTOR to node");

    // Set the node stream descriptor attribute by passing in a pointer to the stream
    // descriptor
    hr = pNode->SetUnknown(MF_TOPONODE_STREAM_DESCRIPTOR, streamDescriptor);
    THROW_ON_FAIL(hr, "Unable to set MF_TOPONODE_STREAM_DESCRIPTOR to node");

    return pNode;
}
After that I connect each source to a transform (H264 encoder and AAC encoder) and to the MPEG4FileSink:
void TopologyBuilder::CreateFileSinkOutputNode(PCWSTR filePath)
{
    HRESULT hr = S_OK;
    DWORD sink_count;
    Com::IMFByteStreamPtr byte_stream;
    Com::IMFTransformPtr transform;

    LPCWSTR lpcwstrFilePath = filePath;
    hr = MFCreateFile(
        MF_ACCESSMODE_WRITE, MF_OPENMODE_FAIL_IF_NOT_EXIST, MF_FILEFLAGS_NONE,
        lpcwstrFilePath, &byte_stream);
    THROW_ON_FAIL(hr, L"Unable to create and open file");

    // Video stream
    Com::IMFMediaTypePtr in_mf_video_media_type = _sourceDefinition->GetCurrentVideoMediaType();
    Com::IMFMediaTypePtr out_mf_media_type = CreateMediaType(MFMediaType_Video, MFVideoFormat_H264);
    hr = CopyType(in_mf_video_media_type, out_mf_media_type);
    THROW_ON_FAIL(hr, L"Unable to copy type parameters");
    if (GetSubtype(in_mf_video_media_type) != MEDIASUBTYPE_H264)
    {
        transform.Attach(CreateAndInitCoderMft(MFT_CATEGORY_VIDEO_ENCODER, out_mf_media_type));
        THROW_ON_NULL(transform);
    }
    if (transform)
    {
        Com::IMFMediaTypePtr transformMediaType;
        hr = transform->GetOutputCurrentType(0, &transformMediaType);
        THROW_ON_FAIL(hr, L"Unable to get current output type");
        UINT32 pcbBlobSize = 0;
        hr = transformMediaType->GetBlobSize(MF_MT_MPEG_SEQUENCE_HEADER, &pcbBlobSize);
        THROW_ON_FAIL(hr, L"Unable to get blob size of MF_MT_MPEG_SEQUENCE_HEADER");
        std::vector<UINT8> blob(pcbBlobSize);
        hr = transformMediaType->GetBlob(MF_MT_MPEG_SEQUENCE_HEADER, &blob.front(), blob.size(), NULL);
        THROW_ON_FAIL(hr, L"Unable to get blob MF_MT_MPEG_SEQUENCE_HEADER");
        hr = out_mf_media_type->SetBlob(MF_MT_MPEG_SEQUENCE_HEADER, &blob.front(), blob.size());
        THROW_ON_FAIL(hr, L"Unable to set blob of MF_MT_MPEG_SEQUENCE_HEADER");
    }

    // Audio stream
    Com::IMFMediaTypePtr out_mf_audio_media_type;
    Com::IMFTransformPtr transformAudio;
    Com::IMFMediaTypePtr mediaTypeTmp = _sourceDefinition->GetCurrentAudioMediaType();
    Com::IMFMediaTypePtr in_mf_audio_media_type;
    if (mediaTypeTmp != NULL)
    {
        std::unique_ptr<MediaTypesFactory> factory(new MediaTypesFactory());
        if (!IsMediaTypeSupportedByAacEncoder(mediaTypeTmp))
        {
            UINT32 channels;
            hr = mediaTypeTmp->GetUINT32(MF_MT_AUDIO_NUM_CHANNELS, &channels);
            THROW_ON_FAIL(hr, L"Unable to get MF_MT_AUDIO_NUM_CHANNELS from source media type");
            in_mf_audio_media_type = factory->CreatePCM(factory->DEFAULT_SAMPLE_RATE, channels);
        }
        else
        {
            in_mf_audio_media_type.Attach(mediaTypeTmp.Detach());
        }
        out_mf_audio_media_type = factory->CreateAAC(in_mf_audio_media_type, factory->HIGH_ENCODED_BITRATE);
        GUID subType = GetSubtype(in_mf_audio_media_type);
        if (GetSubtype(in_mf_audio_media_type) != MFAudioFormat_AAC)
        {
            // Add encoder to AAC
            transformAudio.Attach(CreateAndInitCoderMft(MFT_CATEGORY_AUDIO_ENCODER, out_mf_audio_media_type));
        }
    }

    Com::IMFMediaSinkPtr pFileSink;
    hr = MFCreateMPEG4MediaSink(byte_stream, out_mf_media_type, out_mf_audio_media_type, &pFileSink);
    THROW_ON_FAIL(hr, L"Unable to create mpeg4 media sink");

    Com::IMFTopologyNodePtr pOutputNodeVideo;
    hr = MFCreateTopologyNode(MF_TOPOLOGY_OUTPUT_NODE, &pOutputNodeVideo);
    THROW_ON_FAIL(hr, L"Unable to create output node");
    hr = pFileSink->GetStreamSinkCount(&sink_count);
    THROW_ON_FAIL(hr, L"Unable to get stream sink count from mediasink");
    if (sink_count == 0)
    {
        THROW_ON_FAIL(E_UNEXPECTED, L"Sink count should be greater than 0");
    }
    Com::IMFStreamSinkPtr stream_sink_video;
    hr = pFileSink->GetStreamSinkByIndex(0, &stream_sink_video);
    THROW_ON_FAIL(hr, L"Unable to get stream sink by index");
    hr = pOutputNodeVideo->SetObject(stream_sink_video);
    THROW_ON_FAIL(hr, L"Unable to set stream sink as output node object");
    hr = _pTopology->AddNode(pOutputNodeVideo);
    THROW_ON_FAIL(hr, L"Unable to add file sink output node");
    pOutputNodeVideo = AddEncoderIfNeed(_pTopology, transform, in_mf_video_media_type, pOutputNodeVideo);
    _outVideoNodes.push_back(pOutputNodeVideo);

    Com::IMFTopologyNodePtr pOutputNodeAudio;
    if (in_mf_audio_media_type != NULL)
    {
        hr = MFCreateTopologyNode(MF_TOPOLOGY_OUTPUT_NODE, &pOutputNodeAudio);
        THROW_ON_FAIL(hr, L"Unable to create output node");
        Com::IMFStreamSinkPtr stream_sink_audio;
        hr = pFileSink->GetStreamSinkByIndex(1, &stream_sink_audio);
        THROW_ON_FAIL(hr, L"Unable to get stream sink by index");
        hr = pOutputNodeAudio->SetObject(stream_sink_audio);
        THROW_ON_FAIL(hr, L"Unable to set stream sink as output node object");
        hr = _pTopology->AddNode(pOutputNodeAudio);
        THROW_ON_FAIL(hr, L"Unable to add file sink output node");

        if (transformAudio)
        {
            Com::IMFTopologyNodePtr outputTransformNodeAudio;
            AddTransformNode(_pTopology, transformAudio, pOutputNodeAudio, &outputTransformNodeAudio);
            _outAudioNode = outputTransformNodeAudio;
        }
        else
        {
            _outAudioNode = pOutputNodeAudio;
        }
    }
}
When the output type is applied to the audio transform, it has 15 attributes instead of 8, including MF_MT_AVG_BITRATE, which as I understand should only apply to video. In my case it is 192000, which differs from the MF_MT_AVG_BITRATE on the video stream.
My AAC media type is created by this method:
HRESULT MediaTypesFactory::CopyAudioTypeBasicAttributes(IMFMediaType *in_media_type, IMFMediaType *out_mf_media_type)
{
    HRESULT hr = S_OK;
    static const GUID AUDIO_MAJORTYPE = MFMediaType_Audio;
    static const GUID AUDIO_SUBTYPE = MFAudioFormat_PCM;

    out_mf_media_type->SetUINT32(MF_MT_AUDIO_BITS_PER_SAMPLE, AUDIO_BITS_PER_SAMPLE);

    WAVEFORMATEX *in_wfx;
    UINT32 wfx_size;
    MFCreateWaveFormatExFromMFMediaType(in_media_type, &in_wfx, &wfx_size);

    hr = out_mf_media_type->SetUINT32(MF_MT_AUDIO_SAMPLES_PER_SECOND, in_wfx->nSamplesPerSec);
    DEBUG_ON_FAIL(hr);
    hr = out_mf_media_type->SetUINT32(MF_MT_AUDIO_NUM_CHANNELS, in_wfx->nChannels);
    DEBUG_ON_FAIL(hr);
    hr = out_mf_media_type->SetUINT32(MF_MT_AUDIO_AVG_BYTES_PER_SECOND, in_wfx->nAvgBytesPerSec);
    DEBUG_ON_FAIL(hr);
    hr = out_mf_media_type->SetUINT32(MF_MT_AUDIO_BLOCK_ALIGNMENT, in_wfx->nBlockAlign);
    DEBUG_ON_FAIL(hr);

    // The WAVEFORMATEX returned by MFCreateWaveFormatExFromMFMediaType must be
    // freed by the caller.
    CoTaskMemFree(in_wfx);
    return hr;
}
It would be awesome if somebody could help me or explain where I am wrong.
Thanks.
In my project CaptureManager I faced a similar problem while writing code for recording live video from many web cams into one file. After a long time researching Media Foundation I found two important facts:
1. Live sources (web cams and microphones) do not start from 0. According to the specification, samples from them should start from a zero time stamp (Live Sources: "The first sample should have a time stamp of zero.") - but live sources set the current system time instead.
2. I see from your code that you use the Media Session - an object with the IMFMediaSession interface. I think you create it with the MFCreateMediaSession function. That function creates the default version of the session, which is optimized for playing media from a file, whose samples start from 0 by default.
In my view, the main problem is that the default Media Session does not check the time stamps of the media samples from the source, because samples from a media file start from zero or from the StartPosition. Live sources, however, do not start from 0 - they should, or must, but do not.
So, my advice: write a class with IMFTransform which will be a "proxy" transform between the source and the encoder. This proxy transform must fix the time stamps of the media samples from the live source: when it receives the first media sample, it saves the actual time stamp of that sample as a reference time and sets the first sample's time stamp to zero; for every subsequent sample from this live source, it subtracts the reference time from the sample's time stamp and sets the result back on the sample.
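The core of that rebasing logic might look like this; a minimal sketch only (the class name is illustrative, and a real proxy MFT still has to implement the rest of IMFTransform and pass the samples through):

// Sketch: rebase live-source sample times so the first sample starts at 0.
class SampleTimeRebaser
{
    LONGLONG m_reference = -1; // time stamp of the first sample, in 100-ns units

public:
    HRESULT Rebase(IMFSample *pSample)
    {
        LONGLONG time = 0;
        HRESULT hr = pSample->GetSampleTime(&time);
        if (FAILED(hr))
            return hr;
        if (m_reference < 0)
            m_reference = time; // first sample: remember the reference time
        return pSample->SetSampleTime(time - m_reference);
    }
};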
Also, check your code around the calls to IMFFinalizableMediaSink.
Regards.
MP4 metadata might under some conditions be initialized incorrectly (e.g. like this); however, in the scenario you described the problem is likely to be the payload data and not the way you set up the pipeline in the first place.
Decoders and converters typically pass sample time stamps through, copying them from input to output, so they do not indicate a failure if something is wrong - you still get output that makes sense written into the file. The sink might have trouble processing your data if you have sample-time issues, very long recordings, or an overflow bug, especially with rates expressed as large numerators/denominators. What matters is the sample times the sources produce.
You might want to try shorter recordings, and also video-only and audio-only recordings; that might help identify which stream supplies the data that leads to the problem.
Additionally, you might want to inspect the resulting MP4 file's atoms/boxes to identify whether the header boxes carry incorrect data or the data itself is stamped incorrectly, on which track, and how exactly (especially whether it starts okay and then develops weird gaps in the middle).

IWICBitmapDecoder::Initialize() failing

I have a byte stream pBitmap, and I need to create a decoder from it, so I tried as follows:
IWICStream *piStream = NULL;
IWICBitmapDecoder *piDecoder = NULL;

// piFactory is my IWICImagingFactory
hr = piFactory->CreateStream(&piStream);

// lRawSize is the buffer size
// pBitmap is my byte buffer
hr = piStream->InitializeFromMemory(pBitmap, lRawSize);

hr = piFactory->CreateDecoder(GUID_ContainerFormatJpeg, NULL, &piDecoder);

// HERE I got the error.
hr = piDecoder->Initialize(piStream, WICDecodeMetadataCacheOnDemand);
hr returns "component not found". What could be the problem here?
Update:
I was not sure whether the bitmap source I intend to decode is a JPG or not, so I can understand that passing the container format as GUID_ContainerFormatJpeg is not right.
So I tried IWICImagingFactory::CreateDecoderFromStream:
hr = piFactory->CreateDecoderFromStream(
piStream,
NULL,
WICDecodeMetadataCacheOnDemand,
&piDecoder
);
But the result was the same.
When I initialize the stream from a file, however, it works fine:
hr = piStream->InitializeFromFilename(L"C..\\test.jpg", GENERIC_READ);
So the problem should be in initializing the stream.
I also created an encoder, did some stuff, and saved the result into a file using WritePixels (without creating a decoder):
hr = piBitmapFrame->WritePixels(
    lHeight,
    cbStride,
    cbBufferSize,
    pBitmap);
and it saves a fine image, so I can say that pBitmap surely contains image data.
What could be the problem here?
The cause of the error is using pointers to different objects: piStreamTemp was initialized from the bitmap array, but piDecoder was initialized using piStream, which is empty and was never properly initialized.
In addition, there is a documented recommendation to avoid using the InitializeFromMemory method, and a workaround for it has been described.
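To illustrate the fix, a sketch where a single stream object is initialized from the buffer and that same object is handed to the decoder (error handling trimmed; this assumes pBitmap actually holds an encoded image such as a JPEG, not raw pixels like those passed to WritePixels):

// Sketch: initialize ONE stream from the memory buffer and pass that same
// stream to the decoder, letting WIC sniff the container format itself.
IWICStream *piStream = NULL;
IWICBitmapDecoder *piDecoder = NULL;

hr = piFactory->CreateStream(&piStream);
hr = piStream->InitializeFromMemory(pBitmap, lRawSize);

// CreateDecoderFromStream picks the right decoder from the stream contents,
// so no container GUID has to be guessed.
hr = piFactory->CreateDecoderFromStream(
    piStream, NULL, WICDecodeMetadataCacheOnDemand, &piDecoder);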

how can i store xml in buffer using xmlite?

I'm trying to write XML data with XmlLite to a buffer, but couldn't find an API for it. Writing an XML file works perfectly, but I can't figure out how to write to a memory stream.
I am working from the following API documentation:
http://msdn.microsoft.com/en-us/library/ms752901(v=VS.85).aspx
You can use either SHCreateMemStream or CreateStreamOnHGlobal to create a memory stream. Here is some sample code for your reference:
CComPtr<IStream> spMemoryStream;
CComPtr<IXmlWriter> spWriter;
CComPtr<IXmlWriterOutput> pWriterOutput;

// Opens a writable output stream.
spMemoryStream.Attach(::SHCreateMemStream(NULL, 0));
if (spMemoryStream == NULL)
    return E_OUTOFMEMORY;

// Creates the xml writer and generates the content.
CHKHR(::CreateXmlWriter(__uuidof(IXmlWriter), (void**)&spWriter, NULL));
CHKHR(::CreateXmlWriterOutputWithEncodingName(spMemoryStream, NULL, L"utf-16", &pWriterOutput));
CHKHR(spWriter->SetOutput(pWriterOutput));
CHKHR(spWriter->SetProperty(XmlWriterProperty_Indent, TRUE));
CHKHR(spWriter->WriteStartDocument(XmlStandalone_Omit));
CHKHR(spWriter->WriteStartElement(NULL, L"root", NULL));
CHKHR(spWriter->WriteWhitespace(L"\n"));
CHKHR(spWriter->WriteCData(L"This is CDATA text."));
CHKHR(spWriter->WriteWhitespace(L"\n"));
CHKHR(spWriter->WriteEndDocument());
CHKHR(spWriter->Flush());

// Allocates enough memory for the xml content.
STATSTG ssStreamData = {0};
CHKHR(spMemoryStream->Stat(&ssStreamData, STATFLAG_NONAME));
SIZE_T cbSize = ssStreamData.cbSize.LowPart;
LPWSTR pwszContent = (WCHAR*) new(std::nothrow) BYTE[cbSize + sizeof(WCHAR)];
if (pwszContent == NULL)
    return E_OUTOFMEMORY;

// Copies the content from the stream to the buffer.
LARGE_INTEGER position;
position.QuadPart = 0;
CHKHR(spMemoryStream->Seek(position, STREAM_SEEK_SET, NULL));
ULONG cbRead; // IStream::Read reports the byte count through a ULONG.
CHKHR(spMemoryStream->Read(pwszContent, cbSize, &cbRead));
pwszContent[cbSize / sizeof(WCHAR)] = L'\0';

wprintf(L"%s", pwszContent);
One nice thing about XmlLite is that it works with the IStream interface; it really doesn't care what the stream looks like behind the scenes.
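The same applies on the reading side: feeding a memory stream into an IXmlReader works the same way. A sketch (xmlBytes and xmlSize are illustrative names for an existing UTF-16 buffer such as the one produced above):

// Sketch: parse XML straight out of a memory buffer with XmlLite.
CComPtr<IStream> spInputStream;
CComPtr<IXmlReader> spReader;

spInputStream.Attach(::SHCreateMemStream(xmlBytes, xmlSize));
if (spInputStream == NULL)
    return E_OUTOFMEMORY;

CHKHR(::CreateXmlReader(__uuidof(IXmlReader), (void**)&spReader, NULL));
CHKHR(spReader->SetInput(spInputStream));

// Walk the nodes; print element names as we go.
XmlNodeType nodeType;
while (S_OK == spReader->Read(&nodeType))
{
    if (nodeType == XmlNodeType_Element)
    {
        LPCWSTR pwszName = NULL;
        CHKHR(spReader->GetLocalName(&pwszName, NULL));
        wprintf(L"element: %s\n", pwszName);
    }
}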