To start off, I'm terrible at this DirectShow stuff; I have almost no clue how it works. I'm trying to access a "value" from a camera that's called Area of Interest X and Y, at least that's what it was called in the camera program that came with the camera. Basically it moves the camera's view left to right or top to bottom (the camera does not physically move). The problem is I can't find how to do that in DirectShow.
But, luckily, I came across a program whose source code accessed this value using DirectShow. After looking through the code I found it, and it looked like this:
case IDC_DEVICE_SETUP:
{
    if (gcap.pVCap == NULL)
        break;

    // Ask the capture filter whether it exposes property pages
    ISpecifyPropertyPages *pSpec;
    CAUUID cauuid;
    hr = gcap.pVCap->QueryInterface(IID_ISpecifyPropertyPages, (void **)&pSpec);
    if (hr == S_OK)
    {
        // Get the CLSIDs of the pages and show them in a modal property frame
        hr = pSpec->GetPages(&cauuid);
        hr = OleCreatePropertyFrame(ghwndApp, 30, 30, NULL, 1,
                                    (IUnknown **)&gcap.pVCap, cauuid.cElems,
                                    (GUID *)cauuid.pElems, 0, 0, NULL);
        CoTaskMemFree(cauuid.pElems);
        pSpec->Release();
    }
    break;
}
The problem is that this is a button, and when you click it, it creates a window with some of the camera's properties, which I don't need. Basically, there are two problems. First, I don't want to create a window; I just want to access the value programmatically. Second, I only want to access one specific value from this property page. Is there a way to do that?
The IAMCameraControl interface comes nearest to what you want, but it's not an exact fit. I can't recall a standard DirectShow interface that does exactly this.
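If the driver happens to map the area-of-interest offsets onto the standard pan/tilt properties, IAMCameraControl would already get you there without any UI. A minimal sketch, reusing gcap.pVCap from your code; whether your camera supports CameraControl_Pan and CameraControl_Tilt at all is an assumption:

// Sketch: try the standard pan/tilt camera controls programmatically.
IAMCameraControl *pCameraControl = NULL;
HRESULT hr = gcap.pVCap->QueryInterface(IID_IAMCameraControl, (void **)&pCameraControl);
if (SUCCEEDED(hr))
{
    long lMin, lMax, lStep, lDefault, lFlags;
    if (SUCCEEDED(pCameraControl->GetRange(CameraControl_Pan,
            &lMin, &lMax, &lStep, &lDefault, &lFlags)))
    {
        // Move the view horizontally by one step; the range is driver defined.
        pCameraControl->Set(CameraControl_Pan, lMin + lStep, CameraControl_Flags_Manual);
    }
    pCameraControl->Release();
}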
The property page you see for the IBaseFilter is implemented by the driver for the filter. The driver is free to do whatever it wants with all its knowledge of internal interfaces; there is no need to expose these interfaces to external users. If you are lucky, the camera vendor's property page uses a COM interface that the vendor is willing to document, so that you can use it.
So I would ask the camera vendor whether they provide an official COM interface that you could use. If they don't, you could try to reverse engineer what they do (not so easy) and hope that they don't change the interface with the next software release.
Regarding the general question given in the comments:
COM is a programming interface that defines how to create objects, how to define the interfaces (e.g. methods) of these objects, and how to call methods on the objects.
DirectShow is based on COM. DirectShow defines several COM interfaces like IFilterGraph as a container for all devices and filters that you use. Another COM interface defined by DirectShow is IBaseFilter which is the base interface for all filters (devices, transformation filters) that you could use.
The individual COM objects are sometimes implemented by DirectShow, but device specific objects like the IBaseFilter for your capturing device are implemented by some DLL delivered by the hardware vendor.
In your case, gcap.pVCap is the IBaseFilter interface for the capture device. In COM, objects can implement multiple interfaces. In your code, pVCap is queried (QueryInterface) for whether it supports the ISpecifyPropertyPages interface. If it does, OleCreatePropertyFrame creates the property frame, which displays the property page implemented by the camera object. Complete control goes to the camera object, which implements the ISpecifyPropertyPages interface. When the camera object displays the property page, it can directly access its own properties. But it can also make the properties available externally by exporting another interface, like IMyCameraSpecificInterface.
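To make that concrete, here is a purely hypothetical sketch; IMyCameraSpecificInterface, its IID, and SetAreaOfInterest are made-up placeholders for whatever the vendor actually documents:

// Hypothetical: query the vendor's documented private interface from the
// same filter object and set the value directly, no property page involved.
IMyCameraSpecificInterface *pVendor = NULL;
HRESULT hr = gcap.pVCap->QueryInterface(IID_IMyCameraSpecificInterface, (void **)&pVendor);
if (SUCCEEDED(hr))
{
    pVendor->SetAreaOfInterest(x, y); // signature is whatever the vendor defines
    pVendor->Release();
}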
Related
My end goal is to make a lightweight VRidge clone to understand the OpenVR API, but I'm struggling to understand how to get my code to display something. As a starting point, instead of my phone, I want to create a window as the HMD (SFML, SDL, you name it...) and have SteamVR render the VR view in it.
I understand that an object implementing IServerTrackedDeviceProvider is the handler of my driver's devices, ITrackedDeviceServerDriver is the interface of the device itself, and I suspect that my "HMD" device will have to implement IVRDisplayComponent.
Aside from setting some properties and having callbacks to activate and deactivate my device, I have no clue where to get an actual frame to display on the screen. What am I missing?
You are almost right.
A class inheriting vr::IServerTrackedDeviceProvider (I'll call it the device parent from here on) is responsible for registering and maintaining the lifetime of your devices (creating them, registering them, etc.).
Classes inheriting vr::ITrackedDeviceServerDriver are considered tracked devices once they have been registered by a device parent; the device type is set by the device parent on registration. In the case of the HMD device, the GetComponent() method needs to return display components when requested; for other devices it can just return NULL.
GetComponent() receives a C string with a component name and version, for example "IVRDisplayComponent_002", stored in vr::IVRDisplayComponent_Version. If the device has a component with a matching name/version pair, you need to return a pointer to it; if no match is found, return NULL.
Also, about components: in the driver example that Valve provides, it's done in a very lazy and bad way. DO NOT INHERIT COMPONENTS INTO YOUR DEVICE CLASSES.
Segment your components into separate objects that you initialize in your device and return from GetComponent() accordingly, as in the sketch below.
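A minimal sketch of that pattern; CMyHmd and m_pDisplayComponent are illustrative names:

// Sketch: hand out the component object, not `this`, from GetComponent().
void *CMyHmd::GetComponent(const char *pchComponentNameAndVersion)
{
    if (0 == strcmp(pchComponentNameAndVersion, vr::IVRDisplayComponent_Version))
        return m_pDisplayComponent; // a separate vr::IVRDisplayComponent object owned by this device
    return NULL; // no match: not a component this device provides
}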
Now, the only thing left for your devices to be properly identified and used by SteamVR is registering them. There is a catch: you need to specify the device type when you register it, by passing one of the values from the vr::ETrackedDeviceClass enum (these should be pretty self-explanatory when you look at the enum). For example:
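A hedged sketch of registering an HMD from the device parent; the serial number string and m_pHmd are illustrative:

// Sketch: register the device with SteamVR, declaring its class as HMD.
vr::VRServerDriverHost()->TrackedDeviceAdded(
    "MY_HMD_SERIAL_0",            // must be unique per device (illustrative)
    vr::TrackedDeviceClass_HMD,   // one of vr::ETrackedDeviceClass
    m_pHmd);                      // your vr::ITrackedDeviceServerDriver instance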
This is not all there is to an OpenVR driver, of course. For it all to work, and for SteamVR to even acknowledge your driver's existence, you need to implement an HmdDriverFactory() function. It works similarly to GetComponent(), except you compare the input C string to a provider name/version pair; in the case of a device parent that is vr::IServerTrackedDeviceProvider_Version. If you get a match, return a pointer to the instance of your device parent (or any other provider you implemented):
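A sketch of that factory; HMD_DLL_EXPORT is typically defined as extern "C" __declspec(dllexport), and g_deviceProvider is an illustrative global instance of the device parent:

// Sketch: the exported factory SteamVR calls when it loads the driver DLL.
HMD_DLL_EXPORT void *HmdDriverFactory(const char *pInterfaceName, int *pReturnCode)
{
    if (0 == strcmp(pInterfaceName, vr::IServerTrackedDeviceProvider_Version))
        return &g_deviceProvider;
    if (pReturnCode)
        *pReturnCode = vr::VRInitError_Init_InterfaceNotFound;
    return NULL;
}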
A few notes:
The HMD needs at least one display component.
The HMD device is very sensitive to how you submit poses to it (don't ask why, it just is).
Be prepared for the lack of documentation. The best docs you're going to get are the code comments in openvr_driver.h, the ValveSoftware/openvr issue tracker, and other people working with OpenVR drivers (even though there are only a few...).
This is not the best explanation of how OpenVR drivers work, so you're always welcome to ask for more details in the comments.
What is the relationship between SDL_Joystick and SDL_GameController? These are the only things I know of right now:
SDL_GameController and related functions are all part of a new API introduced in SDL2.
SDL_GameController and related functions are built on top of the existing SDL_Joystick API.
(Working Draft) You can obtain an instance of SDL_Joystick by calling the function SDL_GameControllerGetJoystick() and passing in an instance of SDL_GameController.
(Working Draft) You can obtain an instance of SDL_GameController by first calling SDL_JoystickInstanceID() and passing in an instance of SDL_Joystick, then passing the resulting SDL_JoystickID to SDL_GameControllerFromInstanceID(); see the sketch below.
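A short sketch of those two conversions (SDL_GameControllerFromInstanceID requires SDL 2.0.4 or later; controller is an already-opened SDL_GameController*):

// controller -> joystick -> instance id -> controller again
SDL_Joystick *joy = SDL_GameControllerGetJoystick(controller);
SDL_JoystickID id = SDL_JoystickInstanceID(joy);
SDL_GameController *same = SDL_GameControllerFromInstanceID(id);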
Even though SDL_Joystick and SDL_GameController are interchangeable, it seems like SDL_GameController is here to replace and slowly succeed SDL_Joystick.
The reason is that when polling for SDL_Event, the SDL_Event instance contains both the SDL_Event::jbutton and SDL_Event::cbutton structs, representing the SDL_Joystick buttons and SDL_GameController buttons, respectively. I guess I can use either one, or both, button events for the player controls.
I could be wrong here.
I would like to ask:
What are the differences between SDL_Joystick and SDL_GameController?
Is SDL_Joystick now referring to this controller?
And the same for SDL_GameController?
What are the advantages/disadvantages of using SDL_Joystick over SDL_GameController (and vice versa)?
First of all, SDL game controllers are an extension of SDL joysticks (for the scope of this answer, when I say "controller" or "joystick" I mean SDL's implementation, not the hardware device category in general). As the wiki says,
This category contains functions for handling game controllers and for mapping joysticks to game controller semantics. This is built on top of the existing joystick API.

If you are running your game from Steam, the game controller mapping is automatically provided for your game.
Internally, SDL uses joystick events and processes them to produce game controller events according to the controller mapping. Hence one may say that the joystick is the lower-level thing, while the game controller is a generalisation upon joysticks that produces more predictable/compatible (but more constrained) input for games that want gamepad-like input devices.
With a game controller, you can program input for just one xbox-like controller model, and SDL will make the user's controller compatible with that (sometimes with the user's help; there are way too many different controllers, and we can't possibly expect SDL to have configurations for all of them). Of course, if the controller is very different (or not a controller at all, e.g. flight simulation sticks, wheels, etc.), that would be problematic.
Basically, the game controller provides xbox-like buttons and axes on the user side, freeing the application developer from the need to support controller remapping, as remapping is done in SDL itself. For some popular controllers SDL already has built-in mappings, and for others a user-defined mapping can be loaded via an environment variable.
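To illustrate the difference on the application side, a minimal sketch (SDL_Init(SDL_INIT_GAMECONTROLLER) assumed, error handling omitted): the application only asks whether a mapping exists and then reads semantic buttons instead of device-specific indices.

#include <SDL.h>

// Open the first joystick that SDL recognizes as a game controller.
SDL_GameController *OpenFirstController()
{
    for (int i = 0; i < SDL_NumJoysticks(); ++i)
        if (SDL_IsGameController(i))      // true if a mapping exists for joystick i
            return SDL_GameControllerOpen(i);
    return NULL;
}

// Later, query semantic buttons/axes instead of raw joystick indices:
//   SDL_GameControllerGetButton(pad, SDL_CONTROLLER_BUTTON_A);
//   SDL_GameControllerGetAxis(pad, SDL_CONTROLLER_AXIS_LEFTX);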
There is also a configuration tool that simplifies remapping for the end user, including exporting the resulting configuration to said environment variable. Steam also has a built-in configuration tool, whose configuration it (supposedly; I've never used it) exports to SDL, essentially making users themselves responsible for configuring their controllers.
I am rendering a video file with a FilterGraph consisting of a VMR9 instance. The FilterGraph is created automatically with GraphBuilder->RenderFile(). Basically my setup is described here: http://www.codeproject.com/Articles/9206/Using-the-DirectShow-Video-Mixing-Renderer-filte
The thing is: I would like to detect some video internals like FPS, duration, etc. After the call to RenderFile(), the video is displayed correctly with MediaControl->StopWhenReady() and plays with Run() and Pause().
To detect the frame rate I try to grab the AM_MEDIA_TYPE struct from the VMR9's input pin:
VRM->FindPin("VMR Input0", pin); // S_OK
pin->ConnectionMediaType(&mt); // VFW_E_NOT_CONNECTED
In my opinion, the filter graph should be created correctly by the call to RenderFile(), and therefore this pin should be connected to my input stream. Why is that not the case, and what is the way to proceed in this matter?
Microsoft provides some functions (http://msdn.microsoft.com/en-us/library/windows/desktop/dd375791%28v=vs.85%29.aspx) to traverse the graph and look for specific interfaces like IID_IAMStreamConfig that would allow access to AM_MEDIA_TYPE. But these options fail in my implementation. The only pin I can access is the one mentioned above.
Thanks in advance!
You are coming from the assumption that the filter and pin whose interfaces you hold are connected and are exactly the objects you are interested in. This is not necessarily true, and quite a few questions in the past have shown that people incorrectly understand the topologies they create. You need to review the filter graph and ensure you have what you expect to have. See on this: How can I reverse engineer a DirectShow graph?
Once you have the proper pin connection, you indeed need to use ConnectionMediaType, then go through AM_MEDIA_TYPE to VIDEOINFOHEADER or VIDEOINFOHEADER2 and read the AvgTimePerFrame member.
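A sketch of that path, assuming pPin is the now correctly connected pin (FreeMediaType comes from the DirectShow base classes; without them, free mt.pbFormat with CoTaskMemFree and release mt.pUnk yourself; VIDEOINFOHEADER2 is declared in dvdmedia.h):

AM_MEDIA_TYPE mt;
if (SUCCEEDED(pPin->ConnectionMediaType(&mt)))
{
    REFERENCE_TIME rtFrame = 0; // duration of one frame, in 100 ns units
    if (mt.formattype == FORMAT_VideoInfo && mt.cbFormat >= sizeof(VIDEOINFOHEADER))
        rtFrame = ((VIDEOINFOHEADER *)mt.pbFormat)->AvgTimePerFrame;
    else if (mt.formattype == FORMAT_VideoInfo2 && mt.cbFormat >= sizeof(VIDEOINFOHEADER2))
        rtFrame = ((VIDEOINFOHEADER2 *)mt.pbFormat)->AvgTimePerFrame;

    if (rtFrame > 0)
    {
        double fps = 10000000.0 / rtFrame; // e.g. 333667 -> ~29.97 fps
        // ... use fps ...
    }
    FreeMediaType(mt);
}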
I have been tasked with fixing a bug in a medical application that, among other things, can capture snapshots from intra-oral video cameras. It uses a DirectShow SampleGrabber for this task. I must make the disclaimer that I have not worked with DirectShow, so I'm trying to get up to speed as I go. I understand basically how the various components work together.
Anyway, the bug itself is seemingly trivial, but I can't figure out a workaround. Due to the modular nature of this system, the preview window is part of a separate graph from the one created by the SampleGrabber (it's a long story, but this is due to legacy code to support previous devices). When the camera is active, we can take snapshots and everything is happy. When the camera is turned off, the SampleGrabber takes a dark frame, but DirectShow crashes when releasing the IAMStreamConfig interface created in the preview module (access violation). It seems like for some reason the SampleGrabber graph is somehow corrupting the graph built in the preview module. Due to the nature of this application, I cannot show any source here, but essentially here's what I want to accomplish:
I need to be able to detect whether the camera is actually on or not. The problem I'm having is that when the camera is plugged in (USB), it looks to the system like it is on and returning a video stream; it's just that the stream contains no real data. When I check the state of the capture filter with the GetState method, it claims it is running; also, when I check the video format properties, it returns the correct properties. It seems to me like the button on the camera simply turns the camera sensor itself on and off, but the device still returns a blank stream when the camera is off. Something must be different, though, because it doesn't crash when the sensor is actually on and returning live video.
Does anybody have an idea of how I could determine whether the stream is blank or has live video? I.e., are there any exposed interfaces or methods I could call to determine this? I have looked through the various interfaces in MSDN's DirectShow documentation but haven't been able to find a way to do this.
If you don't want the callback function of your sample grabber to be called, you may consider adding a special transform filter between the sample grabber and the source filter (or right after the source filter). What this transform filter does is check whether the input sample is corrupted and block those corrupted samples. This basically requires you to implement your own Transform() function:
HRESULT CRleFilter::Transform(IMediaSample *pSource, IMediaSample *pDest)
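For the blocking variant, a hedged sketch assuming a CTransformFilter-derived class; IsCorrupted() is an illustrative placeholder for your own check (e.g. the solid-color test sketched further below). Returning S_FALSE from Transform() makes the base class drop the sample instead of delivering it:

HRESULT CRleFilter::Transform(IMediaSample *pSource, IMediaSample *pDest)
{
    BYTE *pData = NULL;
    HRESULT hr = pSource->GetPointer(&pData);
    if (FAILED(hr))
        return hr;

    long cb = pSource->GetActualDataLength();
    if (IsCorrupted(pData, cb))
        return S_FALSE; // S_FALSE: the base class drops this sample

    // Otherwise copy the sample through unchanged (output buffer assumed large enough).
    BYTE *pOut = NULL;
    hr = pDest->GetPointer(&pOut);
    if (FAILED(hr))
        return hr;
    memcpy(pOut, pData, cb);
    pDest->SetActualDataLength(cb);
    return S_OK;
}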
In the filter you connect after the source filter (or the earliest filter you have access to), check the IMediaSample it receives in the Receive() function:
HRESULT Receive(IMediaSample *pSample);
In case you're using ISampleGrabber, you should set its callback function using ISampleGrabber::SetCallback:
HRESULT SetCallback(
ISampleGrabberCB *pCallback,
long WhichMethodToCallback
);
This requires you to implement a class that implements ISampleGrabberCB. After that, you can check each received sample in the SampleCB function:
HRESULT SampleCB(
double SampleTime,
IMediaSample *pSample
);
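Putting the pieces together, a minimal sketch of such a callback that flags frames filled with a single value; the class name and the all-bytes-equal heuristic are assumptions, so adjust the check to your actual pixel format:

class CBlankDetector : public ISampleGrabberCB
{
public:
    // IUnknown; no real reference counting, so keep this object alive
    // for as long as the graph uses it.
    STDMETHODIMP QueryInterface(REFIID riid, void **ppv)
    {
        if (riid == IID_IUnknown || riid == IID_ISampleGrabberCB)
        {
            *ppv = this;
            return S_OK;
        }
        *ppv = NULL;
        return E_NOINTERFACE;
    }
    STDMETHODIMP_(ULONG) AddRef() { return 2; }
    STDMETHODIMP_(ULONG) Release() { return 1; }

    // Called for every sample; decide here whether the frame is "blank".
    STDMETHODIMP SampleCB(double SampleTime, IMediaSample *pSample)
    {
        BYTE *pData = NULL;
        if (SUCCEEDED(pSample->GetPointer(&pData)))
        {
            const long cb = pSample->GetActualDataLength();
            bool solid = cb > 0;
            for (long i = 1; i < cb; ++i)
            {
                if (pData[i] != pData[0]) { solid = false; break; }
            }
            m_blank = solid; // every byte identical: treat as a blank frame
        }
        return S_OK;
    }
    STDMETHODIMP BufferCB(double, BYTE *, long) { return S_OK; }

    volatile bool m_blank = false;
};

// Usage: pGrabber->SetCallback(&g_detector, 0); // 0 selects SampleCB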
There is no universal way to tell whether the camera is connected, or whether the stream is blank or not. You typically have one of the following scenarios:
you stop receiving any samples when camera is off
you receive samples with all pixels zeroed out, a fully blue picture, or something of that sort
Some cameras have a signal-loss notification, but it's model specific, as is the notification method.
So in the first case, you just stop having the callback called. To cover the second one, you need to check whether the frame is filled entirely with a solid color. When you capture raw (uncompressed) video, this is a fairly simple thing to do.
I need to create a virtual webcam that poses as a webcam, but takes as input a set of images that it plays. I have seen solutions like ManyCam and Fake Webcam, but they all seem to have one limitation or another (resolution, max file size, fps, etc.). I am working on Windows XP SP3.
I understand that I have to write a WIA interface for this task, but being a Python programmer, I have never written drivers or interfaces to devices. What are the main tasks in writing this interface? What would the flow look like?
You need to write a DirectShow filter, which is a COM server that implements the IPin, IAMStreamConfig and IKsPropertySet interfaces. For the IPin part, you'd better start by inheriting the CSourceStream class. For that you need to get the Windows SDK; with the SDK installed, the DirectShow base classes sources are in the samples\multimedia\directshow folder, where you'll find CSourceStream (among many others). The DllRegisterServer function of the COM server should register your filter within the CLSID_VideoInputDeviceCategory category using the filter mapper.
After building the COM server, you register it with the regsvr32 tool, and your virtual webcam should appear in the webcam lists.
Also check the samples\multimedia\directshow\filters\ball sample, which can be improved and used as a starting point for your task.
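The heart of such a CSourceStream-derived pin is the FillBuffer override, which hands each outgoing frame to DirectShow. A hedged sketch, where CMyVCamStream, m_pImage, m_cbImage, m_rtPosition and m_rtFrameLength are illustrative members, and the media type negotiated in GetMediaType/SetFormat is assumed to match the image data:

HRESULT CMyVCamStream::FillBuffer(IMediaSample *pSample)
{
    BYTE *pData = NULL;
    HRESULT hr = pSample->GetPointer(&pData);
    if (FAILED(hr))
        return hr;

    // Copy the current image of the set into the outgoing sample.
    long cbBuf = pSample->GetSize();
    long cb = (m_cbImage < cbBuf) ? m_cbImage : cbBuf;
    memcpy(pData, m_pImage, cb);
    pSample->SetActualDataLength(cb);

    // Timestamp the sample so downstream filters pace playback correctly.
    REFERENCE_TIME rtStart = m_rtPosition;
    REFERENCE_TIME rtStop = rtStart + m_rtFrameLength; // e.g. 10000000 / fps
    pSample->SetTime(&rtStart, &rtStop);
    m_rtPosition = rtStop;

    pSample->SetSyncPoint(TRUE);
    return S_OK;
}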
Read this first
https://learn.microsoft.com/en-us/windows/win32/directshow/writing-source-filters
Then you can adapt
https://github.com/roman380/tmhare.mvps.org-vcam
You can work on top of this sample virtual camera.
This implements the IAMStreamConfig and IKsPropertySet interfaces.
It is built using the CSourceStream and CSource classes, which implement IPin and IBaseFilter.