I'm working on a media player with Media Foundation and I'm trying to use post-processing with DXVA-HD. However, when I call VideoProcessBltHD on the HD device, it fails with E_INVALIDARG. My suspicion is that it doesn't work correctly with the IDirect3DSurface9 I'm providing as input. I'm getting the input surface from an IMFMediaBuffer (which I get from reading a sample with the SourceReader).
I'm extracting the surface as follows:
CHECK_HR (hr = MFGetService( video_buffer, MR_BUFFER_SERVICE, __uuidof(IDirect3DSurface9), (void**)&pSurface) );
However, in the DXVA-HD sample on MSDN, VideoProcessBltHD works fine, and there the input IDirect3DSurface9 is an off-screen plain surface.
How do I pass my surface (which holds the video data) to the video processor as an off-screen plain surface so that the blt operation succeeds?
Any help would be appreciated.
Thanks
Mots
I would suggest installing the full DirectX SDK, switching the runtime to debug mode in the DirectX Control Panel, turning on full validation and break-on-error, and running your app under the debugger. That way you will get a human-readable DirectX error description.
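If you also want the error text in code, the June 2010 SDK ships DxErr.h / dxerr.lib; a minimal sketch (purely illustrative):

// Illustrative only: print a readable name/description for a failing HRESULT,
// using DxErr.h from the June 2010 DirectX SDK (link against dxerr.lib).
#include <windows.h>
#include <DxErr.h>
#include <cstdio>
#pragma comment(lib, "dxerr.lib")

void ReportHr(HRESULT hr)
{
    if (FAILED(hr))
    {
        // e.g. "E_INVALIDARG" - "An invalid parameter was passed..."
        wprintf(L"HRESULT 0x%08X: %s - %s\n",
                static_cast<unsigned>(hr),
                DXGetErrorStringW(hr),
                DXGetErrorDescriptionW(hr));
    }
}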
I've written a C++ program that receives an RTSP stream via gstreamer and displays the video via Qt5 in a QWidget. As the gstreamer video sink, I use a Widgetqt5glvideosink.
The problem is that the received stream has too much red in it. This only occurs when the vertical resolution exceeds roughly 576 pixels (lower resolutions have no issue).
When I use CPU rendering (Widgetqt5videosink) instead of OpenGL rendering I get a correct image.
When I view the stream via the gstreamer command line or via VLC it is also correct.
So it seems to be an issue with the OpenGL-rendered QWidget.
Is this a driver issue or something else?
Info:
Tested on Ubuntu 16.04 and 17.04 for the viewer application.
Links:
https://gstreamer.freedesktop.org/data/doc/gstreamer/head/qt-gstreamer/html/qtvideosink_overview.html
I managed to fix my problem by patching two files in the source code of qt-gstreamer.
There were two wrong color matrices for BT.709 colorimetry.
Patch to fix red artifact in Widgetqt5glvideosink
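For reference, the textbook BT.709 conversion the color matrices should implement is roughly the following (full-range coefficients, for illustration only; this is not the actual patch, and limited-range video additionally needs the 16-235 / 16-240 offsets and scaling):

// Standard BT.709 YCbCr -> RGB conversion (illustrative, not the qt-gstreamer patch).
// Kr = 0.2126, Kb = 0.0722; inputs normalised so Y is in [0,1] and Cb/Cr in [-0.5,0.5].
struct RGB { float r, g, b; };

RGB Bt709ToRgb(float y, float cb, float cr)
{
    RGB out;
    out.r = y + 1.5748f * cr;
    out.g = y - 0.1873f * cb - 0.4681f * cr;
    out.b = y + 1.8556f * cb;
    return out;
}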
So I am trying to figure out how to get a video feed (or a screenshot feed if I must) of the desktop using OpenGL in Windows and display it in a 3D environment. I plan to integrate this with ARToolkit to make essentially a virtual screen. The only issue is that I have tried manually grabbing the pixels in OpenGL, but I have been unable to display them properly in a 3D environment.
I apologize in advance that I do not have minimal runnable code, but with all the dependencies involved, getting ARToolkit code running would be far from minimal. How would I capture the desktop on Windows and display it in ARToolkit?
BONUS: If you can grab each desktop from the 'virtual' desktops in Windows 10, that would be an excellent bonus!
Alternative: If you know another AR library that renders differently, or allows me to achieve the same effect, I would be grateful.
There are 2 different problems here:
a) Make an augmentation that plays video
b) Stream the desktop to somewhere else
For playing video on an augmentation you basically need a texture that gets updated on each frame. I recall that ARToolkit for Unity has an example that plays video, however.
Streaming the desktop to the other device is a problem of its own. There are tools that do screen recording, but you probably don't want that.
It sounds to me like what you want to do is make a VLC viewer and put that into an augmentation. If I am correct, I suggest you start by looking at existing open-source VLC viewers.
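If you do end up rolling your own capture on Windows, a minimal GDI-based sketch (no error handling; the names are mine) can grab the desktop into a pixel buffer that you then upload to the augmentation's texture every frame, e.g. with glTexSubImage2D. Note that plain GDI only sees the current desktop, so the Windows 10 virtual-desktop bonus would need a different approach:

// Sketch only: grab the primary desktop into a top-down 32-bit BGRA buffer with GDI.
// (DXGI Desktop Duplication is faster on Windows 8+, but GDI keeps the example short.)
#include <windows.h>
#include <cstdint>
#include <vector>

bool CaptureDesktopBGRA(std::vector<std::uint8_t>& pixels, int& width, int& height)
{
    width  = GetSystemMetrics(SM_CXSCREEN);
    height = GetSystemMetrics(SM_CYSCREEN);

    HDC screenDC = GetDC(nullptr);                 // DC covering the whole screen
    HDC memDC    = CreateCompatibleDC(screenDC);
    HBITMAP bmp  = CreateCompatibleBitmap(screenDC, width, height);
    HGDIOBJ old  = SelectObject(memDC, bmp);

    BitBlt(memDC, 0, 0, width, height, screenDC, 0, 0, SRCCOPY);   // copy the screen
    SelectObject(memDC, old);                      // deselect before GetDIBits

    BITMAPINFO bmi = {};
    bmi.bmiHeader.biSize        = sizeof(BITMAPINFOHEADER);
    bmi.bmiHeader.biWidth       = width;
    bmi.bmiHeader.biHeight      = -height;         // negative => top-down rows
    bmi.bmiHeader.biPlanes      = 1;
    bmi.bmiHeader.biBitCount    = 32;
    bmi.bmiHeader.biCompression = BI_RGB;

    pixels.resize(static_cast<size_t>(width) * height * 4);
    int lines = GetDIBits(memDC, bmp, 0, height, pixels.data(), &bmi, DIB_RGB_COLORS);

    DeleteObject(bmp);
    DeleteDC(memDC);
    ReleaseDC(nullptr, screenDC);
    return lines == height;
}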
First some necessary preamble:
I'm using DirectX9 and have no choice I'm afraid. Dealing with legacy code. Sorry!
I'm using DirectX9 SDK June 2010.
What I'm trying to achieve is the following. I have an application which can render in both windowed mode and fullscreen using DirectX. I need to be able to play video when in fullscreen mode and having a visible transition between rendering of the application and playing video is unacceptable (i.e. kick out of fullscreen and go back in again). Thus I need to use the same DirectX device to render the application and the video.
I've achieved this by putting together a custom allocator/presenter as per the vmr9allocator example in the DirectX SDK (see e.g. C:\Program Files\Microsoft SDKs\Windows\v7.1\Samples\multimedia\directshow\vmr9\vmr9allocator). My version differs somewhat as the DirectX device used by the allocator class isn't owned by that class but is passed in from the rendering part of the application (actually from Angle GL layer to be precise, but this isn't particularly relevant for this discussion). However the setup of the filter is all the same.
Everything works fine apart from the following scenario. If the application loses focus then the fullscreen DirectX device is lost. In this situation I want to stop the video and terminate the device, enter windowed mode and create a new DirectX device for rendering in windowed mode. I can achieve this, but I seem to leave DirectShow in some unstable state. The problem manifests itself when I try to subsequently render video in windowed mode. Rather than using the DX device in this case, I do this by creating a subwindow of my main window for DirectShow to render to. So in particular for the DX rendering methodology I call
SetRenderingMode(VMR9Mode_Renderless)
and for the windowed version I call
SetRenderingMode(VMRMode_Windowless)
on the IVMRFilterConfig interface.
What I see (after the fullscreen device loss during video) is that the windowed video will not render to my manually specified window. Instead it insists on opening its own parentless window exactly as if
SetRenderingMode(VMRMode_Windowed)
had been called. If I debug the code then I see that SetRenderingMode(VMRMode_Windowless) returns an "unknown error"!
It'll be difficult for me to get advice here on what's wrong with my code as there's a lot of it and posting it all is probably not helpful. So what I'd like to know is what the correct way to deal with loss of device during video rendering is. Maybe then I can pinpoint what's going wrong with my code. With reference to the aforementioned DX sample, the key problem occurs in the CAllocator::PresentImage function:
HRESULT CAllocator::PresentImage(
    /* [in] */ DWORD_PTR dwUserID,
    /* [in] */ VMR9PresentationInfo *lpPresInfo)
{
    HRESULT hr;
    CAutoLock Lock(&m_ObjectLock);

    if( NeedToHandleDisplayChange() )
    {
        // empty in this excerpt
    }

    hr = PresentHelper( lpPresInfo );

    if( hr == D3DERR_DEVICELOST )
    {
        // The sample recreates the device here and hands the new one to the VMR.
        if( m_D3DDev->TestCooperativeLevel() == D3DERR_DEVICENOTRESET )
        {
            DeleteSurfaces();
            FAIL_RET( CreateDevice() );

            HMONITOR hMonitor = m_D3D->GetAdapterMonitor( D3DADAPTER_DEFAULT );
            FAIL_RET( m_lpIVMRSurfAllocNotify->ChangeD3DDevice( m_D3DDev, hMonitor ) );
        }
        hr = S_OK;
    }
    return hr;
}
This function is called every frame on a thread which is managed by DirectShow and which is started when the video is played. The example code indicates that you'd recreate the device here. Firstly, this doesn't make any sense to me, as you're only supposed to call create/reset/TestCooperativeLevel on the window-message thread in DX9, so this breaks that usage rule! Secondly, I can't actually do this anyway, as the DX device is provided externally, so we can't reset it. However, I can't find any sensible way to tell the system not to continue rendering. I can do nothing and return S_OK or some failure code, but the problem persists.
So finally, the question! Does anyone know what the correct approach is to handling this situation, i.e. on device loss just stop the video?
N.B. I'm not ruling out some other problem deep in my code somewhere. But if I at least know what the correct approach to doing what I want is then I can hopefully rule in/out at least one part of the code.
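Is something along these lines the right direction? (Just a sketch of what I mean; UWM_DEVICE_LOST and m_hAppWnd are placeholder names.) In PresentImage I would leave the device alone and only notify the app, and the window-message thread would then stop the graph and tear everything down:

    if( hr == D3DERR_DEVICELOST )
    {
        // Don't reset/recreate here: the device isn't ours and this isn't the
        // window-message thread. Just tell the app and keep the VMR9 thread happy.
        PostMessage( m_hAppWnd, UWM_DEVICE_LOST, 0, 0 );
        hr = S_OK;
    }

    // ...and in the window procedure (the thread that owns the device):
    // case UWM_DEVICE_LOST:
    //     pMediaControl->Stop();   // IMediaControl::Stop on the graph
    //     // release the allocator/presenter and the graph, then the device,
    //     // switch to windowed mode and rebuild from scratch
    //     break;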
I'm writing a plugin in FireBreath, in C++.
I don't have any experience with both, so my question may be very basic.
How do I place a JPEG image inside my plugin window?
Or at least, how would I do it in a simple C++ program?
Thanks,
RRR
There are a couple of other questions that may help you better understand this:
How to write a web browser plugin for IE, Firefox and Chrome
Directx control in browser plugin
Basically you'll get a drawing model from FireBreath with the AttachedEvent. Depending on your platform, you will draw to that window using platform-specific drawing APIs. On Windows, for example, you would get the HWND from the PluginWindow (cast it to a PluginWindowWin) and then draw to that. Just make sure you stop drawing when DetachedEvent shows up.
For more information, you'll need to be a lot more specific; but follow those links and do some reading, then you'll know better what questions to ask.
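To give you a flavour of the Windows drawing side, here is a rough, non-FireBreath-specific sketch using GDI+ (the HWND would be the one obtained from the PluginWindowWin as described above; the file path is a placeholder):

// Sketch: load a JPEG with GDI+ and paint it into a window's client area.
#include <windows.h>
#include <objidl.h>
#include <gdiplus.h>
#pragma comment(lib, "gdiplus.lib")

void PaintJpegIntoWindow(HWND hwnd, const wchar_t* path)
{
    PAINTSTRUCT ps;
    HDC hdc = BeginPaint(hwnd, &ps);
    {
        Gdiplus::Graphics graphics(hdc);
        Gdiplus::Image image(path);            // decodes the JPEG
        RECT rc;
        GetClientRect(hwnd, &rc);
        graphics.DrawImage(&image, 0, 0, rc.right - rc.left, rc.bottom - rc.top);
    }
    EndPaint(hwnd, &ps);
}

// GDI+ needs to be started once before any drawing, e.g. at plugin/program startup:
//   ULONG_PTR token;
//   Gdiplus::GdiplusStartupInput input;
//   Gdiplus::GdiplusStartup(&token, &input, nullptr);
// ...and shut down with Gdiplus::GdiplusShutdown(token) at exit.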
FireBreath 1.5.2 was just released, btw! Good luck!
You can also use OpenGL to display images in the plugin. There are several tutorials on loading a JPEG image as an OpenGL texture. The same code can be ported into the FireBreath plugin using the OpenGL sample plugin for Windows that is already provided, though OpenGL context creation varies from one OS to the other. If you want to load JPEG images from the web, you'll have to download the image before converting it into an OpenGL texture.
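For example, with a single-header decoder such as stb_image (just one option, any JPEG decoder will do), turning a JPEG into an OpenGL texture is roughly:

// Rough sketch: decode a JPEG with stb_image and upload it as an OpenGL texture.
// Assumes a valid OpenGL context is already current (FireBreath's OpenGL sample
// plugin takes care of that part) and that stb_image.h is available.
#define STB_IMAGE_IMPLEMENTATION
#include "stb_image.h"
#include <windows.h>
#include <GL/gl.h>

GLuint LoadJpegTexture(const char* path)
{
    int w = 0, h = 0, channels = 0;
    unsigned char* data = stbi_load(path, &w, &h, &channels, 4);  // force RGBA
    if (!data)
        return 0;

    GLuint tex = 0;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0, GL_RGBA, GL_UNSIGNED_BYTE, data);

    stbi_image_free(data);
    return tex;
}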
In DirectX 10 you could use the font interface provided by D3DX10. In DirectX 11 you are supposed to use DirectWrite. But it looks like DirectWrite doesn't speak natively to Direct3D? Is there something basic I'm missing? How do you draw simple text with DirectX 11?
There's a SpriteFont class in the DirectXTK library that can render text to a DirectX 11 device context.
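Roughly like this (a sketch only; the .spritefont file is generated offline with the toolkit's MakeSpriteFont tool, and the device/context are assumed to exist already):

// Sketch of DirectXTK text rendering; assumes an existing ID3D11Device/DeviceContext.
#include <d3d11.h>
#include <DirectXMath.h>
#include <memory>
#include "SpriteBatch.h"
#include "SpriteFont.h"

std::unique_ptr<DirectX::SpriteBatch> g_spriteBatch;
std::unique_ptr<DirectX::SpriteFont>  g_font;

void InitText(ID3D11Device* device, ID3D11DeviceContext* context)
{
    g_spriteBatch = std::make_unique<DirectX::SpriteBatch>(context);
    g_font = std::make_unique<DirectX::SpriteFont>(device, L"myfont.spritefont"); // made with MakeSpriteFont
}

void RenderText(float x, float y)
{
    g_spriteBatch->Begin();
    g_font->DrawString(g_spriteBatch.get(), L"Hello, DirectX 11", DirectX::XMFLOAT2(x, y));
    g_spriteBatch->End();
}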
Edit 2014:
As pointed out in the comments, the link to the RasterTek tutorials no longer works; here is a link to the Web Archive copy of RasterTek provided by RadioSpace.
The second point of the original answer is no longer valid either, because it's now possible to share a D3D11 back buffer with Direct2D and, through it, draw text with DirectWrite.
Edit 2017:
RasterTek is still running: font rendering in D3D11
I know of two options:
make your own font rendering engine; see the RasterTek DX11 tutorial
the second option, using DirectWrite, requires sharing the back buffer between D3D11 and D3D10.1 devices, using DWrite + D2D + D3D10.1 to render the GUI and the D3D11 device to render the 3D geometry, and merging it all in the back buffer; see the post from DieterVW on this thread
At the moment DWrite and D2D don't accept a surface created with a D3D11 device for rendering, but hopefully MS will make that possible soon.
I recently converted a DirectX 10 application over to DirectX 11 and came across this post while searching for the same answer. The Rastertek tutorial mentioned by Zeela is good, but I continued searching and found FW1FontWrapper. After only about 30 minutes, I just finished integrating it into my project and it appears to be working great. The download section provides both x86 and x64 packages including header, library, and DLL. I am just doing a simple text output, so I can't say a lot about the API, except that for what I did (outputting frames per second), it only took about 5 lines of code (including creating/releasing the wrapper object). Based on his samples it seems to provide a lot more options than what I'm using so far.
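For reference, my usage boils down to something like this (from memory, so double-check FW1FontWrapper.h for the exact signatures):

// Rough sketch of FW1FontWrapper usage.
#include <d3d11.h>
#include "FW1FontWrapper.h"

IFW1FontWrapper* g_fontWrapper = nullptr;

void InitFont(ID3D11Device* device)
{
    IFW1Factory* factory = nullptr;
    FW1CreateFactory(FW1_VERSION, &factory);
    factory->CreateFontWrapper(device, L"Arial", &g_fontWrapper);
    factory->Release();
}

void DrawFps(ID3D11DeviceContext* context, const wchar_t* text)
{
    // font size 16, position (10,10), white, restore D3D state afterwards
    g_fontWrapper->DrawString(context, text, 16.0f, 10.0f, 10.0f, 0xffffffff, FW1_RESTORESTATE);
}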
Use DirectWrite; it supports high-quality text rendering, resolution-independent outline fonts, and full Unicode text and layout support.
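For completeness, the Direct2D 1.1 route mentioned in the 2014 edit above looks roughly like this (sketch only, no error handling; it assumes the D3D11 device was created with D3D11_CREATE_DEVICE_BGRA_SUPPORT and the swap chain uses a BGRA format; in real code the factories, text format and brush would be created once at startup, not every frame):

// Sketch: DirectWrite text on a D3D11 swap chain via Direct2D 1.1 interop.
#include <d3d11.h>
#include <d2d1_1.h>
#include <dwrite.h>
#include <wrl/client.h>
#pragma comment(lib, "d2d1.lib")
#pragma comment(lib, "dwrite.lib")

using Microsoft::WRL::ComPtr;

void DrawOverlayText(ID3D11Device* device, IDXGISwapChain* swapChain)
{
    // Wrap the D3D11 device for Direct2D.
    ComPtr<ID2D1Factory1> d2dFactory;
    D2D1CreateFactory(D2D1_FACTORY_TYPE_SINGLE_THREADED, d2dFactory.GetAddressOf());
    ComPtr<IDXGIDevice> dxgiDevice;
    device->QueryInterface(IID_PPV_ARGS(&dxgiDevice));
    ComPtr<ID2D1Device> d2dDevice;
    d2dFactory->CreateDevice(dxgiDevice.Get(), &d2dDevice);
    ComPtr<ID2D1DeviceContext> d2dContext;
    d2dDevice->CreateDeviceContext(D2D1_DEVICE_CONTEXT_OPTIONS_NONE, &d2dContext);

    // Point Direct2D at the swap chain's back buffer.
    ComPtr<IDXGISurface> backBuffer;
    swapChain->GetBuffer(0, IID_PPV_ARGS(&backBuffer));
    D2D1_BITMAP_PROPERTIES1 props = {};
    props.pixelFormat.format    = DXGI_FORMAT_B8G8R8A8_UNORM;
    props.pixelFormat.alphaMode = D2D1_ALPHA_MODE_PREMULTIPLIED;
    props.dpiX = 96.0f;
    props.dpiY = 96.0f;
    props.bitmapOptions = D2D1_BITMAP_OPTIONS_TARGET | D2D1_BITMAP_OPTIONS_CANNOT_DRAW;
    ComPtr<ID2D1Bitmap1> target;
    d2dContext->CreateBitmapFromDxgiSurface(backBuffer.Get(), &props, &target);
    d2dContext->SetTarget(target.Get());

    // DirectWrite describes the text, Direct2D draws it.
    ComPtr<IDWriteFactory> dwriteFactory;
    DWriteCreateFactory(DWRITE_FACTORY_TYPE_SHARED, __uuidof(IDWriteFactory),
                        reinterpret_cast<IUnknown**>(dwriteFactory.GetAddressOf()));
    ComPtr<IDWriteTextFormat> format;
    dwriteFactory->CreateTextFormat(L"Segoe UI", nullptr, DWRITE_FONT_WEIGHT_NORMAL,
                                    DWRITE_FONT_STYLE_NORMAL, DWRITE_FONT_STRETCH_NORMAL,
                                    24.0f, L"en-us", &format);
    ComPtr<ID2D1SolidColorBrush> brush;
    D2D1_COLOR_F white = { 1.0f, 1.0f, 1.0f, 1.0f };
    d2dContext->CreateSolidColorBrush(white, &brush);

    D2D1_RECT_F layout = { 10.0f, 10.0f, 600.0f, 60.0f };
    d2dContext->BeginDraw();
    d2dContext->DrawText(L"Hello, DirectWrite", 18, format.Get(), layout, brush.Get());
    d2dContext->EndDraw();
}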