I have a program that prepares movie frames using Win2D. Each frame is a CanvasBitmap.
I would like to save these frames as a movie, so I convert from CanvasBitmap to ID3D11Texture2D as shown below. However, after writing a few dozen samples I get "Device lost" errors, probably caused by how the sample is released.
Is there another way to convert a CanvasBitmap to an ID3D11Texture2D, so that the movie frames can be saved efficiently?
int AddWin2DFrame(CanvasBitmap^ frame)
{
    // Unwrap the native Direct2D objects behind the Win2D wrappers.
    ComPtr<ID2D1Device> nativeDevice = GetWrappedResource<ID2D1Device>(frame->Device);
    ComPtr<ID2D1Bitmap1> nativeBitmap = GetWrappedResource<ID2D1Bitmap1>(frame);

    // The D2D bitmap is backed by a DXGI surface, which is also a D3D11 resource.
    ComPtr<IDXGISurface> dxgiSurface;
    CHK(nativeBitmap->GetSurface(&dxgiSurface));
    ComPtr<ID3D11Resource> d3dResource;
    CHK(dxgiSurface.As(&d3dResource));

    // Wrap the surface in a Media Foundation buffer without copying it.
    Microsoft::WRL::ComPtr<IMFMediaBuffer> writeBuffer;
    CHK(MFCreateDXGISurfaceBuffer(__uuidof(ID3D11Texture2D), d3dResource.Get(), 0, false, &writeBuffer));
    ComPtr<IMF2DBuffer> p2DBuffer;
    DWORD length;
    CHK(writeBuffer.As(&p2DBuffer));
    CHK(p2DBuffer->GetContiguousLength(&length));
    CHK(writeBuffer->SetCurrentLength(length));

    // Build a sample around the buffer and hand it to the sink writer.
    Microsoft::WRL::ComPtr<IMFSample> writeSample;
    CHK(MFCreateSample(&writeSample));
    CHK(writeSample->AddBuffer(writeBuffer.Get()));
    CHK(writeSample->SetSampleDuration(sampleDuration));
    CHK(writeSample->SetSampleTime(_hnsSampleTime));
    CHK(_spSinkWriter->WriteSample(_streamIndex, writeSample.Get()));
    return 0;
}
The problem has since been solved. The fix is to set MF_SINK_WRITER_D3D_MANAGER on the sink writer's attributes, as below:
ComPtr<IMFAttributes> spAttr;
CHK(MFCreateAttributes(&spAttr, 4));
CHK(spAttr->SetUnknown(MF_SINK_WRITER_D3D_MANAGER, deviceManager.Get()));
.........
CHK(MFCreateSinkWriterFromURL(ws1.c_str(), spByteStream.Get(), spAttr.Get(), &_spSinkWriter));
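For completeness, the deviceManager passed to SetUnknown above can be created with MFCreateDXGIDeviceManager and bound to the D3D11 device before the sink writer is created. A minimal sketch, assuming d3dDevice is the ID3D11Device behind the Win2D CanvasDevice (the variable names are mine, not from the original code):
// Sketch: create the DXGI device manager expected by MF_SINK_WRITER_D3D_MANAGER.
// The device should be created with D3D11_CREATE_DEVICE_VIDEO_SUPPORT so that
// Media Foundation can use it.
UINT resetToken = 0;
ComPtr<IMFDXGIDeviceManager> deviceManager;
CHK(MFCreateDXGIDeviceManager(&resetToken, &deviceManager));
CHK(deviceManager->ResetDevice(d3dDevice.Get(), resetToken));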
I'm trying to convert a video file (.mp4) to a DICOM file.
I have managed to do it by storing single images (one per video frame) in the DICOM file, but the result is far too large a file; that's no good for me.
Instead, I want to encapsulate the H.264 bitstream, as it is stored in the video file, into the DICOM file.
I've tried to get the bytes of the file as follows:
std::ifstream inFile(file_name, std::ifstream::binary);
inFile.seekg(0, inFile.end);
std::streampos length = inFile.tellg();
inFile.seekg(0, inFile.beg);
std::vector<unsigned char> bytes(length);
inFile.read((char*)&bytes[0], length);
but I think I am missing something like encapsulation of the read bytes, because the resulting DICOM file was a black image.
In Python I would use the pydicom.encaps.encapsulate function for this purpose:
https://pydicom.github.io/pydicom/dev/reference/generated/pydicom.encaps.encapsulate.html
with open(videofile, 'rb') as f:
    dataset.PixelData = encapsulate([f.read()])
Is there anything in C++ that is equivalent to the encapsulate function?
Or any different way to get the encapsulated pixel data of the video as one stream, and not frame by frame?
This is the code that initializes the DcmDataset, using the extracted bytes:
VideoFileStream* vfs = new VideoFileStream();
vfs->setFilename(file_name);
if (!vfs->open())
    return false;
DcmDataset* dataset = new DcmDataset();
dataset->putAndInsertOFStringArray(DCM_SeriesInstanceUID, dcmGenerateUniqueIdentifier(new char[100], SITE_SERIES_UID_ROOT));
dataset->putAndInsertOFStringArray(DCM_SOPInstanceUID, dcmGenerateUniqueIdentifier(new char[100], SITE_INSTANCE_UID_ROOT));
dataset->putAndInsertOFStringArray(DCM_StudyInstanceUID, dcmGenerateUniqueIdentifier(new char[100], SITE_STUDY_UID_ROOT));
dataset->putAndInsertOFStringArray(DCM_MediaStorageSOPInstanceUID, dcmGenerateUniqueIdentifier(new char[100], SITE_UID_ROOT));
dataset->putAndInsertString(DCM_MediaStorageSOPClassUID, UID_VideoPhotographicImageStorage);
dataset->putAndInsertString(DCM_SOPClassUID, UID_VideoPhotographicImageStorage);
dataset->putAndInsertOFStringArray(DCM_TransferSyntaxUID, UID_MPEG4HighProfileLevel4_1TransferSyntax);
dataset->putAndInsertOFStringArray(DCM_PatientID, "987655");
dataset->putAndInsertOFStringArray(DCM_StudyDate, "20050509");
dataset->putAndInsertOFStringArray(DCM_Modality, "ES");
dataset->putAndInsertOFStringArray(DCM_PhotometricInterpretation, "YBR_PARTIAL_420");
dataset->putAndInsertUint16(DCM_SamplesPerPixel, 3);
dataset->putAndInsertUint16(DCM_BitsAllocated, 8);
dataset->putAndInsertUint16(DCM_BitsStored, 8);
dataset->putAndInsertUint16(DCM_HighBit, 7);
dataset->putAndInsertUint16(DCM_Rows, vfs->height());
dataset->putAndInsertUint16(DCM_Columns, vfs->width());
dataset->putAndInsertUint16(DCM_CineRate, vfs->framerate());
dataset->putAndInsertUint16(DCM_FrameTime, 1000.0 * 1 / vfs->framerate());
const Uint16* arr = new Uint16[4]{ 0x18, 0x00, 0x63, 0x10 };
dataset->putAndInsertUint16Array(DCM_FrameIncrementPointer, arr, 4);
dataset->putAndInsertString(DCM_NumberOfFrames, std::to_string(vfs->numFrames()).c_str());
dataset->putAndInsertOFStringArray(DCM_FrameOfReferenceUID, dcmGenerateUniqueIdentifier(new char[100], SITE_UID_ROOT));
dataset->putAndInsertUint16(DCM_PixelRepresentation, 0);
dataset->putAndInsertUint16(DCM_PlanarConfiguration, 0);
dataset->putAndInsertOFStringArray(DCM_ImageType, "ORIGINAL");
dataset->putAndInsertOFStringArray(DCM_LossyImageCompression, "01");
dataset->putAndInsertOFStringArray(DCM_LossyImageCompressionMethod, "ISO_14496_10");
dataset->putAndInsertUint16(DCM_LossyImageCompressionRatio, 30);
dataset->putAndInsertUint8Array(DCM_PixelData, (const Uint8 *)bytes.data(), length);
DJ_RPLossy repParam;
dataset->chooseRepresentation(EXS_MPEG4HighProfileLevel4_1, &repParam);
dataset->updateOriginalXfer();
DcmFileFormat fileformat(dataset);
OFCondition status = fileformat.saveFile("C://temp//videoTest", EXS_LittleEndianExplicit);
The trick is to redirect the value of the attribute PixelData to a file stream. With this, the video is loaded in chunks and on demand (i.e. when the attribute is accessed).
But you have to create the whole structure explicitly, that is:
- the Pixel Data element
- the Pixel Sequence, with...
  - ...the offset table
  - ...a single item containing the contents of the MPEG file
Code
// set length to the size of the video file
DcmInputFileStream dcmFileStream(videofile.c_str(), 0);

DcmPixelSequence* pixelSequence = new DcmPixelSequence(DCM_PixelSequenceTag);
DcmPixelItem* offsetTable = new DcmPixelItem(DCM_PixelItemTag);
pixelSequence->insert(offsetTable);

DcmPixelItem* frame = new DcmPixelItem(DCM_PixelItemTag);
frame->createValueFromTempFile(dcmFileStream.newFactory(), OFstatic_cast(Uint32, length), EBO_LittleEndian);
pixelSequence->insert(frame);

DcmPixelData* pixelData = new DcmPixelData(DCM_PixelData);
pixelData->putOriginalRepresentation(EXS_MPEG4HighProfileLevel4_1, nullptr, pixelSequence);
dataset->insert(pixelData, true);
DcmFileFormat fileformat(dataset);
OFCondition status = fileformat.saveFile("C://temp//videoTest");
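The length used above is simply the size of the video file; one way to obtain it with DCMTK's own utilities (a sketch, not part of the original answer):
// Sketch: determine the size of the video file before wrapping the stream.
size_t fileSize = OFStandard::getFileSize(videofile.c_str());
Uint32 length = OFstatic_cast(Uint32, fileSize);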
Note that you "destroy" the compression if you save the file in VR Implicit Little Endian.
As mentioned above, and obvious in the code, the whole MPEG file is wrapped into a single item in the Pixel Data. This is DICOM conformant, but you may want to encapsulate each frame in an item of its own, as sketched below.
Note: no error handling is shown here.
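If you do want per-frame items, the pattern stays the same. A rough sketch, assuming you have already split the H.264 stream into frames yourself; frameCount, frameData and frameLength are hypothetical, since DCMTK will not parse the MPEG stream for you:
// Sketch: one pixel item per encoded frame instead of one big item.
for (size_t i = 0; i < frameCount; ++i)
{
    DcmPixelItem* item = new DcmPixelItem(DCM_PixelItemTag);
    item->putUint8Array(frameData[i], frameLength[i]);
    pixelSequence->insert(item);
}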
I've started to learn DirectX 12 and I'm trying to build a simple engine.
I'm following Frank D. Luna's "Introduction to 3D Game Programming with DirectX 12" and I have run into some problems.
First, when creating the swap chain, filling the description like this:
DXGI_SWAP_CHAIN_DESC swapChainDesc;
swapChainDesc.BufferDesc.Width = Core::displayWidth;
swapChainDesc.BufferDesc.Height = Core::displayHeight;
swapChainDesc.BufferDesc.RefreshRate.Numerator = 60;
swapChainDesc.BufferDesc.RefreshRate.Denominator = 1;
swapChainDesc.BufferDesc.Format = Core::pixelDefinitionFormat;
swapChainDesc.BufferDesc.ScanlineOrdering = DXGI_MODE_SCANLINE_ORDER_UNSPECIFIED;
swapChainDesc.BufferDesc.Scaling = DXGI_MODE_SCALING_UNSPECIFIED;
swapChainDesc.SampleDesc.Count = Core::multiSamplingLevel ? 4 : 1;
swapChainDesc.SampleDesc.Quality = Core::multiSamplingEnabled ? (Core::multiSamplingLevel - 1) : 0;
swapChainDesc.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT;
swapChainDesc.BufferCount = (INT) Core::buffering;
swapChainDesc.OutputWindow = Core::mainWindow;
swapChainDesc.Windowed = true;
swapChainDesc.SwapEffect = DXGI_SWAP_EFFECT_FLIP_DISCARD;
swapChainDesc.Flags = DXGI_SWAP_CHAIN_FLAG_ALLOW_MODE_SWITCH;
// Note: Swap chain uses queue to perform flush.
ThrowIfFailed(Core::factory->CreateSwapChain(
Core::commandQueue.Get(),
&swapChainDesc,
Core::swapChain.GetAddressOf()
));
I receive a "bad parameter" error. I've already found a solution on MSDN, but I want to know what I am doing wrong.
The second question is why I get:
D3D12 ERROR: ID3D12GraphicsCommandList::*: A single command list cannot write to multiple buffers within a particular swapchain. [ STATE_SETTING ERROR #904: COMMAND_LIST_MULTIPLE_SWAPCHAIN_BUFFER_REFERENCES]
during execution of this screen-clearing code fragment:
void Renderer::drawSomething() {
// Reuse the memory associated with command recording.
// We can only reset when the associated command lists have finished
// execution on the GPU.
ThrowIfFailed(Core::commandAllocator->Reset());
// A command list can be reset after it has been added to the
// command queue via ExecuteCommandList. Reusing the command list reuses memory.
ThrowIfFailed(Core::commandList->Reset(Core::commandAllocator.Get(), NULL));
// Set the viewport and scissor rect. This needs to be reset
// whenever the command list is reset.
Core::commandList->RSSetViewports(1, &Core::viewport);
Core::commandList->RSSetScissorRects(1, &Core::scissorsRectangle);
// Indicate a state transition on the resource usage.
Core::commandList->ResourceBarrier(
1,
&CD3DX12_RESOURCE_BARRIER::Transition(
Core::getCurrentBackBuffer().Get(),
D3D12_RESOURCE_STATES::D3D12_RESOURCE_STATE_PRESENT,
D3D12_RESOURCE_STATES::D3D12_RESOURCE_STATE_RENDER_TARGET
)
);
// Specify the buffers we are going to render to.
Core::commandList->OMSetRenderTargets(
1,
&Core::getCurrentBackBufferView(),
true,
&Core::getDSVHeapStartDescriptorHandle()
);
// Clear the back buffer and depth buffer.
Core::commandList->ClearRenderTargetView(
Core::getCurrentBackBufferView(),
DirectX::Colors::LightSteelBlue,
0,
NULL
);
Core::commandList->ClearDepthStencilView(
Core::getDSVHeapStartDescriptorHandle(),
D3D12_CLEAR_FLAGS::D3D12_CLEAR_FLAG_DEPTH | D3D12_CLEAR_FLAGS::D3D12_CLEAR_FLAG_STENCIL,
1.0f,
0,
0,
NULL
);
// Indicate a state transition on the resource usage.
Core::commandList->ResourceBarrier(
1,
&CD3DX12_RESOURCE_BARRIER::Transition(
Core::getCurrentBackBuffer().Get(),
D3D12_RESOURCE_STATES::D3D12_RESOURCE_STATE_RENDER_TARGET,
D3D12_RESOURCE_STATES::D3D12_RESOURCE_STATE_PRESENT
)
);
// Done recording commands.
ThrowIfFailed(Core::commandList->Close());
// Add the command list to the queue for execution.
ID3D12CommandList* cmdsLists[] = {Core::commandList.Get()};
Core::commandQueue->ExecuteCommandLists(_countof(cmdsLists), cmdsLists);
// swap the back and front buffers
ThrowIfFailed(Core::swapChain->Present(0, 0));
UINT buffering = Core::buffering;
Core::currentBackBuffer = (Core::currentBackBuffer + 1) % buffering;
Core::flushCommandQueue();
}
To avoid making a big mess in this post I won't paste all the code here, but if you would like to see how it looks, or if it matters in this case, my whole repository is here:
repository
It's very small and simple; almost all the code is in the Core class.
Thank you in advance!
Edit:
I found the solution to the second question.
The problem was in this loop:
void Core::createSwapChainBuffersIntoRTVHeap() {
    for (UINT i = 0; i < Core::buffering; i++) {
        CD3DX12_CPU_DESCRIPTOR_HANDLE rtvHeapHandle(rtvDescriptorHeap->GetCPUDescriptorHandleForHeapStart());
        ErrorUtils::messageAndExitIfFailed(
            swapChain->GetBuffer(i, IID_PPV_ARGS(&swapChainBackBuffers[i])),
            L"Error getting the back buffer!",
            GET_SWAPCHAIN_BACK_BUFFER_ERROR
        );
        device->CreateRenderTargetView(swapChainBackBuffers[i].Get(), NULL, rtvHeapHandle);
        // Remembers the offset; this is just an ordinary heap
        rtvHeapHandle.Offset(1, rtvDescriptorSize);
    }
}
I made only one move to get the code to look like this; before, the handle was re-created at the heap start on every iteration, so all the render target views were written into the same descriptor, and one RTV ended up referencing several swap-chain buffers:
void Core::createSwapChainBuffersIntoRTVHeap() {
    CD3DX12_CPU_DESCRIPTOR_HANDLE rtvHeapHandle(rtvDescriptorHeap->GetCPUDescriptorHandleForHeapStart());
    for (UINT i = 0; i < Core::buffering; i++) {
        ErrorUtils::messageAndExitIfFailed(
            swapChain->GetBuffer(i, IID_PPV_ARGS(&swapChainBackBuffers[i])),
            L"Error getting the back buffer!",
            GET_SWAPCHAIN_BACK_BUFFER_ERROR
        );
        device->CreateRenderTargetView(swapChainBackBuffers[i].Get(), NULL, rtvHeapHandle);
        // Remembers the offset; this is just an ordinary heap
        rtvHeapHandle.Offset(1, rtvDescriptorSize);
    }
}
After that, when closing the command list went right, I got an AccessViolationException in D3D12.dll on:
ThrowIfFailed(Core::swapChain->Present(0, 0));
which, after a few hours of internet research, I fixed by forcing WARP for this application using "dxcpl.exe".
I assume that was because I work on a laptop with an HD 4000 and an NVIDIA card as the second GPU, but I'm not sure.
This will not help you fix the problem at once, but it will give you much more information. DirectX 12 wants you to push all commands into command lists, and it only reports an error when you call Close() on one. Note that you cannot "resurrect" a command list that failed on Close() by resetting it; the best you can do is delete it and create a new command list. In that case, however, you may not update referenced resources, so I don't really recommend doing that.
What you can do, though, is use the ID3D12InfoQueue interface to make your program break on D3D12 errors when the debugger is attached.
Get it from your device:
ID3D12InfoQueue* InfoQueue = nullptr;
Core::device->QueryInterface(IID_PPV_ARGS(&InfoQueue));
Enable "break on severity":
InfoQueue->SetBreakOnSeverity(D3D12_MESSAGE_SEVERITY_ERROR, true);
InfoQueue->SetBreakOnSeverity(D3D12_MESSAGE_SEVERITY_CORRUPTION, true);
InfoQueue->SetBreakOnSeverity(D3D12_MESSAGE_SEVERITY_WARNING, false);
And release it when you're done:
InfoQueue->Release();
You can also set up your InfoQueue to whitelist or blacklist sets of D3D12 message IDs. Your error ID is D3D12_MESSAGE_ID_COMMAND_LIST_MULTIPLE_SWAPCHAIN_BUFFER_REFERENCES (code 904).
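A minimal sketch of such a blacklist; the filter structure is standard D3D12, only the chosen message ID is specific to this case:
// Sketch: suppress one specific message ID via a deny list.
D3D12_MESSAGE_ID denyIds[] = {
    D3D12_MESSAGE_ID_COMMAND_LIST_MULTIPLE_SWAPCHAIN_BUFFER_REFERENCES
};
D3D12_INFO_QUEUE_FILTER filter = {};
filter.DenyList.NumIDs = _countof(denyIds);
filter.DenyList.pIDList = denyIds;
InfoQueue->AddStorageFilterEntries(&filter);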
Hope that this will help someone to deal with this graphics API.
I want to use the DCMTK 3.6.1 library in an existing project that creates DICOM images; I want to use this library to compress the DICOM images. In a new solution (Visual Studio 2013/C++), following the example in the official DCMTK documentation, I have this code, which works properly:
using namespace std;
int main()
{
DJEncoderRegistration::registerCodecs();
DcmFileFormat fileformat;
/**** MONO FILE ******/
if (fileformat.loadFile("Files/test.dcm").good())
{
DcmDataset *dataset = fileformat.getDataset();
DcmItem *metaInfo = fileformat.getMetaInfo();
DJ_RPLossless params; // codec parameters, we use the defaults
// this causes the lossless JPEG version of the dataset
// to be created (EXS_JPEGProcess14SV1)
dataset->chooseRepresentation(EXS_JPEGProcess14SV1, &params);
// check if everything went well
if (dataset->canWriteXfer(EXS_JPEGProcess14SV1))
{
// force the meta-header UIDs to be re-generated when storing the file
// since the UIDs in the data set may have changed
delete metaInfo->remove(DCM_MediaStorageSOPClassUID);
delete metaInfo->remove(DCM_MediaStorageSOPInstanceUID);
metaInfo->putAndInsertString(DCM_ImplementationVersionName, "New Implementation Version Name");
//delete metaInfo->remove(DCM_ImplementationVersionName);
//dataset->remove(DCM_ImplementationVersionName);
// store in lossless JPEG format
fileformat.saveFile("Files/carrellata_esami_compresso.dcm", EXS_JPEGProcess14SV1);
}
}
DJEncoderRegistration::cleanup();
return 0;
}
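(Side note: this example registers only the encoders. If the files you load are themselves already JPEG-compressed, the decoder side has to be registered as well; a minimal sketch, assuming dcmjpeg/djdecode.h is included:)
// Sketch: mirror the encoder registration for the decoding direction.
DJDecoderRegistration::registerCodecs();
// ... load and process JPEG-compressed DICOM files here ...
DJDecoderRegistration::cleanup();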
Now I want to use the same code in an existing C++ application, where:
if (infoDicom.arrayImgDicom.GetSize() != 0) //Things of existing previous code
{
//I have added here the registration
DJEncoderRegistration::registerCodecs(); // register JPEG codecs
DcmFileFormat fileformat;
DcmDataset *dataset = fileformat.getDataset();
DJ_RPLossless params;
dataset->putAndInsertUint16(DCM_Rows, infoDicom.rows);
dataset->putAndInsertUint16(DCM_Columns, infoDicom.columns);
dataset->putAndInsertUint16(DCM_BitsStored, infoDicom.m_bitstor);
dataset->putAndInsertUint16(DCM_HighBit, infoDicom.highbit);
dataset->putAndInsertUint16(DCM_PixelRepresentation, infoDicom.pixelrapresentation);
dataset->putAndInsertUint16(DCM_RescaleIntercept, infoDicom.rescaleintercept);
dataset->putAndInsertString(DCM_PhotometricInterpretation,"MONOCHROME2");
dataset->putAndInsertString(DCM_PixelSpacing, "0.086\\0.086");
dataset->putAndInsertString(DCM_ImagerPixelSpacing, "0.096\\0.096");
BYTE* pData = new BYTE[sizeBuffer];
LPBYTE pSorg;
for (int nf=0; nf<iNumberFrames; nf++)
{
//this contains all the PixelData and I put it into the dataset
pSorg = (BYTE*)infoDicom.arrayImgDicom.GetAt(nf);
dataset->putAndInsertUint8Array(DCM_PixelData, pSorg, sizeBuffer);
dataset->chooseRepresentation(EXS_JPEGProcess14SV1, &params);
//and I put it in my dataset
//but this if returns false, so canWriteXfer fails...
if (dataset->canWriteXfer(EXS_JPEGProcess14SV1))
{
dataset->remove(DCM_MediaStorageSOPClassUID);
dataset->remove(DCM_MediaStorageSOPInstanceUID);
}
//the saveFile fails too, and the error is "Pixel
//representation not found", but I have set the pixel representation with
//dataset->putAndInsertUint16(DCM_PixelRepresentation, infoDicom.pixelrapresentation);
OFCondition status = fileformat.saveFile("test1.dcm", EXS_JPEGProcess14SV1);
DJEncoderRegistration::cleanup();
if (status.bad())
{
int error = 0; //only for test
}
thefile.Write(pSorg, sizeBuffer); //previous code
}
Actually, I tested with an image that has only one frame, so the for loop runs only once. I don't understand why dataset->chooseRepresentation(EXS_LittleEndianImplicit, &params); or dataset->chooseRepresentation(EXS_LittleEndianExplicit, &params); work perfectly, but not dataset->chooseRepresentation(EXS_JPEGProcess14SV1, &params);
If I use the same image in the first application, I can compress it without problems...
EDIT: I think the main problem to solve is that status = dataset->chooseRepresentation(EXS_JPEGProcess14SV1, &rp_lossless) returns "Tag not found". How can I find out which tag is missing?
EDIT 2: As suggested on the DCMTK forum, I added the Bits Allocated tag, and now it works for a few images, but not for all. For some images I again get "Tag not found": how can I find out which tag is missing? As a rule, is it better to insert all the tags?
I solved the problem by adding the tags DCM_BitsAllocated and DCM_PlanarConfiguration; these were the tags that were missing. I hope this is useful for someone.
In any case, you should call chooseRepresentation only after you have inserted the pixel data:
dataset->putAndInsertUint8Array(DCM_PixelData, pSorg, sizeBuffer);
dataset->chooseRepresentation(EXS_JPEGProcess14SV1, &params);
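Putting the two answers together, the body of the loop would look roughly like this (a sketch based on the question's code; the BitsAllocated value of 16 is only illustrative and must match your actual pixel data):
// Sketch: insert the missing format tags, then the pixel data,
// and only then choose the compressed representation.
dataset->putAndInsertUint16(DCM_BitsAllocated, 16);      // was missing
dataset->putAndInsertUint16(DCM_PlanarConfiguration, 0); // was missing
dataset->putAndInsertUint8Array(DCM_PixelData, pSorg, sizeBuffer);
OFCondition status = dataset->chooseRepresentation(EXS_JPEGProcess14SV1, &params);
if (status.good() && dataset->canWriteXfer(EXS_JPEGProcess14SV1))
    status = fileformat.saveFile("test1.dcm", EXS_JPEGProcess14SV1);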
I am currently working on passing an image as Base64 to a REST service, but I am not able to read the file. Here is my code:
if (requestCode == iPictureCode && resultCode == RESULT_OK) {
String picturePath = data.getStringExtra(Intents.EXTRA_PICTURE_FILE_PATH);
String image = processPictureWhenReady(picturePath);
}
private String processPictureWhenReady(final String picturePath) {
final File pictureFile = new File(picturePath);
if (pictureFile.exists()) {
// The picture is ready; process it.
Bitmap bitmap = BitmapFactory.decodeFile(pictureFile.getAbsolutePath());
bitmap = CameraUtils.resizeImageToHalf(bitmap);
ByteArrayOutputStream stream = new ByteArrayOutputStream();
bitmap.compress(Bitmap.CompressFormat.JPEG, 85, stream);
String base64Image = Base64.encodeToString(stream.toByteArray(), Base64.DEFAULT);
return base64Image;
}
return null; // the picture file was not ready yet
}
It never enters the if block for pictureFile.exists().
What could be the problem?
It looks like you adapted your code from the example in the developer docs, but you removed the else clause where a FileObserver is used to detect when the image is actually available. You need that part as well because the full-sized image may not be ready immediately when the activity returns.
I specifically need to load a JPG image that was saved as a blob. GDI+ makes it very easy to retrieve images from files, but not from databases...
Take a look at Image::Image(IStream*, BOOL). It takes a pointer to a COM object implementing the IStream interface. You can get one of these by allocating some global memory with GlobalAlloc and then calling CreateStreamOnHGlobal on the returned handle. It will look something like this:
shared_ptr<Image> CreateImage(BYTE *blob, size_t blobSize)
{
    // Copy the blob into movable global memory.
    HGLOBAL hMem = ::GlobalAlloc(GMEM_MOVEABLE, blobSize);
    BYTE *pImage = (BYTE*)::GlobalLock(hMem);
    memcpy(pImage, blob, blobSize);
    ::GlobalUnlock(hMem);

    // Wrap the memory in an IStream; TRUE makes the stream free hMem on release.
    CComPtr<IStream> spStream;
    HRESULT hr = ::CreateStreamOnHGlobal(hMem, TRUE, &spStream);

    shared_ptr<Image> image(new Image(spStream));
    return image;
}
But with error checking and such (omitted here to make things clearer)
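For illustration, a caller might look like this (a sketch; blobData, blobSize and hdc are hypothetical, and GDI+ is assumed to have been initialized with GdiplusStartup). Image::GetLastStatus tells you whether GDI+ actually understood the blob:
// Sketch: load the blob and draw it, verifying that the decode succeeded.
shared_ptr<Image> image = CreateImage(blobData, blobSize);
if (image && image->GetLastStatus() == Gdiplus::Ok)
{
    Gdiplus::Graphics graphics(hdc); // hdc from BeginPaint, for example
    graphics.DrawImage(image.get(), 0, 0);
}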
First fetch your blob into a byte array, then use something like this:
public static Image CreateImage(byte[] pict)
{
    // GDI+ requires the stream to remain open for the lifetime of the Image,
    // so do not dispose the MemoryStream here.
    System.IO.MemoryStream stream = new System.IO.MemoryStream(pict);
    return System.Drawing.Image.FromStream(stream);
}