My end goal is to make a lightweight VRidge clone in order to understand the OpenVR API, but I'm struggling to understand how to get my code to display something. As a starting point, instead of my phone, I want to create a window that acts as the HMD (SFML, SDL, you name it...) and have SteamVR render the VR view into it.
I understand that an object implementing IServerTrackedDeviceProvider is the handler of my driver's devices, ITrackedDeviceServerDriver is the interface of the device itself, and I suspect that my "HMD" device will have to implement IVRDisplayComponent.
Aside from setting some properties and having callbacks to activate and deactivate my device, I have no clue where to get an actual frame to display on the screen. What am I missing?
You are almost right.
A class inheriting vr::IServerTrackedDeviceProvider (I'll call it the device parent from here on) is responsible for registering your devices and maintaining their lifetime (creating them, registering them, etc.).
Classes inheriting vr::ITrackedDeviceServerDriver are considered tracked devices once they have been registered by a device parent; the device type is set by the device parent at registration. In the case of the HMD device, the GetComponent() method needs to return display components when they are requested; for other devices it can just return NULL.
GetComponent() receives a C string with a component name and version, for example "IVRDisplayComponent_002" (stored in vr::IVRDisplayComponent_Version). If the device has a component with a matching name/version pair, you need to return a pointer to it; if no match is found, return NULL.
One more thing about components: in the driver example that Valve provides, this is done in a very lazy and bad way. DO NOT MAKE YOUR DEVICE CLASSES INHERIT FROM COMPONENTS.
Segment your components into separate objects that you initialize in your device and return from GetComponent() accordingly.
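A minimal sketch of that pattern (MyHmdDevice and MyDisplayComponent are made-up names; only GetComponent() is shown, with the rest of the device interface elided):

#include <cstring>
#include <openvr_driver.h>

// MyDisplayComponent would implement vr::IVRDisplayComponent (definition elided).
class MyHmdDevice : public vr::ITrackedDeviceServerDriver {
public:
    void *GetComponent(const char *pchComponentNameAndVersion) override
    {
        // Return the component whose name/version pair matches the request.
        if (std::strcmp(pchComponentNameAndVersion, vr::IVRDisplayComponent_Version) == 0)
            return static_cast<vr::IVRDisplayComponent *>(&m_display);

        return nullptr; // no match: SteamVR treats this as "component not provided"
    }

    // ... Activate(), Deactivate(), GetPose(), etc. elided ...

private:
    MyDisplayComponent m_display; // a separate member object, not a base class
};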
Now, the only thing left for your devices to be properly identified and used by SteamVR is to register them. There is a catch, though: you need to specify the device type at registration by passing one of the values from the vr::ETrackedDeviceClass enum (these should be pretty self-explanatory when you look at the enum).
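A hedged sketch of registration, which typically happens in the device parent's Init() (MyServerProvider, MyHmdDevice, and the serial number are made up; TrackedDeviceAdded() and the context macro come from openvr_driver.h):

vr::EVRInitError MyServerProvider::Init(vr::IVRDriverContext *pDriverContext)
{
    VR_INIT_SERVER_DRIVER_CONTEXT(pDriverContext); // hook up the driver context

    m_hmd = new MyHmdDevice();
    vr::VRServerDriverHost()->TrackedDeviceAdded(
        "MyHMD_0001",                // unique serial number
        vr::TrackedDeviceClass_HMD,  // device type from vr::ETrackedDeviceClass
        m_hmd);

    return vr::VRInitError_None;
}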
This is not all there is to an openvr driver, of course. For it all to work, and for SteamVR to even acknowledge your driver's existence, you need to implement an HmdDriverFactory() function. It works similarly to GetComponent(), except you compare the input C string to a provider name/version pair; in the case of a device parent that pair is vr::IServerTrackedDeviceProvider_Version. If you get a match, return a pointer to the instance of your device parent (or any other provider you implemented).
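A sketch of the factory, assuming a single server provider (MyServerProvider is made up; HMD_DLL_EXPORT is the export macro used in Valve's sample driver, i.e. extern "C" plus the platform's dllexport/visibility attribute):

#include <cstring>
#include <openvr_driver.h>

static MyServerProvider g_serverProvider; // the device parent instance

HMD_DLL_EXPORT void *HmdDriverFactory(const char *pInterfaceName, int *pReturnCode)
{
    // Hand out the provider whose name/version pair matches the request.
    if (std::strcmp(pInterfaceName, vr::IServerTrackedDeviceProvider_Version) == 0)
        return &g_serverProvider;

    if (pReturnCode)
        *pReturnCode = vr::VRInitError_Init_InterfaceNotFound;
    return nullptr;
}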
A few notes:
An HMD needs at least one display component
The HMD device is very sensitive to how you submit poses to it (don't ask why, it just is)
Be prepared for the lack of documentation; the best docs you're going to get are the code comments in openvr_driver.h, the ValveSoftware/openvr issue tracker, and other people working with openvr drivers (even though there are only a few...)
This is not the best explanation of how openvr drivers work, so you're always welcome to ask for more details in the comments
What is the relationship between SDL_Joystick and SDL_GameController? These are the only things I know of right now:
SDL_GameController and related functions are all part of a new API introduced in SDL2.
SDL_GameController and related functions are built on top of the existing SDL_Joystick API.
(Working Draft) You can obtain an instance of SDL_Joystick by calling the function SDL_GameControllerGetJoystick() and passing in an instance of SDL_GameController.
(Working Draft) You can obtain an instance of SDL_GameController by first calling SDL_JoystickInstanceID() and passing in an instance of SDL_Joystick, then passing the resulting SDL_JoystickID to SDL_GameControllerFromInstanceID() (see the sketch after this list).
Even though SDL_Joystick and SDL_GameController seem interchangeable, it looks like SDL_GameController is here to replace and slowly succeed SDL_Joystick.
The reason is that when polling for SDL_Event, the SDL_Event instance contains both the SDL_Event::jbutton and SDL_Event::cbutton structs, representing the SDL_Joystick buttons and SDL_GameController buttons, respectively. I guess I can use either one, or both, of the button events for the player controls.
I could be wrong here.
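For reference, here is the round-trip from the two working-draft points above as an untested sketch:

#include <SDL.h>

int main(int, char **)
{
    SDL_Init(SDL_INIT_GAMECONTROLLER);

    SDL_GameController *pad = SDL_GameControllerOpen(0); // assumes a controller at index 0
    if (pad) {
        SDL_Joystick *joy = SDL_GameControllerGetJoystick(pad);          // controller -> joystick
        SDL_JoystickID id = SDL_JoystickInstanceID(joy);                 // joystick -> instance id
        SDL_GameController *same = SDL_GameControllerFromInstanceID(id); // id -> controller
        SDL_assert(same == pad); // both handles refer to the same controller
        SDL_GameControllerClose(pad);
    }
    SDL_Quit();
}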
I would like to ask:
What are the differences between SDL_Joystick and SDL_GameController?
Is SDL_Joystick now referring to this controller?
And the same for SDL_GameController?
What are the advantages/disadvantages of using SDL_Joystick over SDL_GameController (and vice versa)?
First of all, SDL game controllers are an extension of SDL joysticks (for the scope of this answer, when I say "controller" or "joystick" I mean SDL's implementation, not the hardware device category in general). As the wiki says,
This category contains functions for handling game controllers and for mapping joysticks to game controller semantics. This is built on top of the existing joystick API.
If you are running your game from Steam, the game controller mapping is automatically provided for your game.
Internally, SDL uses joystick events and processes them to produce game controller events according to the controller mapping. Hence one may say that the joystick is the lower-level thing, while the game controller is a generalisation built upon joysticks to produce more predictable/compatible (but more constrained) input for games that want gamepad-like input devices.
With a game controller, you can program input for just one xbox-like controller layout, and SDL will make the user's controller compatible with that (sometimes with the user's help - there are way too many different controllers, and we can't possibly expect SDL to have configurations for all of them). Of course, if the controller is very different (or not a controller at all - e.g. flight simulation sticks, wheels, etc.), that would be problematic.
Basically, the game controller provides xbox-like buttons and axes on the user's side, freeing the application developer from the need to support controller remapping - as remapping is done in SDL itself. For some popular controllers SDL already has builtin mappings, and for others a user-defined mapping can be loaded via an environment variable (SDL_GAMECONTROLLERCONFIG).
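To illustrate, a minimal sketch of opening whatever recognized controllers are plugged in (the mapping-file name is illustrative; community databases like gamecontrollerdb.txt are commonly used):

#include <SDL.h>
#include <cstdio>

int main(int, char **)
{
    SDL_Init(SDL_INIT_GAMECONTROLLER); // also initializes the joystick subsystem

    // Optional: load extra mappings from a file.
    SDL_GameControllerAddMappingsFromFile("gamecontrollerdb.txt");

    for (int i = 0; i < SDL_NumJoysticks(); ++i) {
        if (SDL_IsGameController(i)) { // true if SDL has a mapping for this joystick
            SDL_GameController *pad = SDL_GameControllerOpen(i);
            if (pad) {
                std::printf("Opened: %s\n", SDL_GameControllerName(pad));
                // ... poll SDL_CONTROLLERBUTTONDOWN / SDL_CONTROLLERAXISMOTION ...
                SDL_GameControllerClose(pad);
            }
        }
    }
    SDL_Quit();
}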
There is also a configuration tool that simplifies remapping for the end user, including exporting the resulting configuration to said environment variable. Steam also has a builtin configuration tool, whose configuration it (supposedly - I've never used that) exports to SDL - essentially making users themselves responsible for configuring their controllers.
I'm designing a component-based system and everything works well, but a critical feature is missing: the ability to obtain a component of a given type from within a class of type Object, where components can be added to and removed from this class. In the Object class there exists a vector of components, thus:
vector<Component*> pComponents;
And in the Component class, a Component has to have a name. So, a component such as Drawable would be added like so:
pPlayer->addComponent(new Drawable("Drawable"));
And that's all that's required for the player to be drawable. Now this is the problem: when it comes to adding loads of components that rely on other components, is there an established way for components to communicate with one another?
Currently, in my Game class (which is not of type Object, although I might have it derive from Object, albeit I'm not sure that would be a good design decision) I have this code in the update function:
void Game::update()
{
    pPlayer->update(0);
    pSpriteLoader->getSprite()->move(pPlayer->getVelocity());
}
I'm using SFML2 solely because it's easy to use for 2D graphics. The player's update function calls the components' respective update functions. The sprite loader is also a component; it handles the loading of sprites through textures/images that can be read from file or memory. If I were to omit that second line of code, the sprite would not appear to move on the screen. As you can see, it's odd that pPlayer has a getVelocity() function, and that's because I haven't moved all the physics stuff into its own component yet. What I'm dreading is this: once I have moved the physics stuff out of the Player class into a Physical component class, how can I get these components to communicate with each other without resorting to lines of code like the ones above?
My solution is to create a component manager which registers each component; any component that requires another component would have to consult this component manager instead of talking to the other component directly. Would this be a viable solution, and how could I proceed with the logic of such a component manager?
Well, I suppose you would start by designing a messaging system.
It appears that you want to heavily decouple code and create components as much as possible. This is fine, I suppose, but the answer to having dependencies without coupling is probably something along the lines of message passing.
Say you need an Achievement system. This is a perfect example of a system that needs to stick its hand into as many nooks and crannies as possible in order to allow for the greatest flexibility in designing achievements. But how would you be able to stick your hand into, say, the Physics, AI, and Input systems all at the same time and not write spaghetti code? The answer would be to put listeners on event queues and then run them by certain criteria based on the contents of the messages.
So for each component, you might want to inherit a common message sending/receiving component, possibly with generics, in order to send data messages. For example, say you shoot a laser in an FPS game. The laser will most likely make a sound, and the laser will most likely need an animation. You will probably want to send a message to the sound system to play a certain sound, and then send a message to the physics system or such to simulate the effects of the laser.
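As a rough illustration, here is a minimal event-bus sketch (all names are made up, and real implementations usually queue messages rather than dispatching them immediately):

#include <functional>
#include <string>
#include <unordered_map>
#include <vector>

struct Event { std::string type; /* payload fields would go here */ };

class EventBus {
public:
    using Listener = std::function<void(const Event &)>;

    void subscribe(const std::string &type, Listener fn) {
        listeners_[type].push_back(std::move(fn));
    }
    void publish(const Event &e) {
        for (auto &fn : listeners_[e.type]) fn(e); // immediate dispatch for simplicity
    }

private:
    std::unordered_map<std::string, std::vector<Listener>> listeners_;
};

// Usage: the sound system subscribes, the weapon code publishes.
// bus.subscribe("laser_fired", [](const Event &) { /* play the laser sound */ });
// bus.publish({"laser_fired"});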
If you're interested, I have a really, really crude library for modeling an event system based on queues, listeners, and generic messages here: https://github.com/VermillionAzure/Flexiglass You might get some insight from fellow newbie code.
I really suggest taking a look at Boost.Signals2 or Boost.Asio as well. Knowledge of signals/networking can often help when designing communication systems, even if the system is just between game components.
I've recently been working on an entity-component system in SFML and came across the same problem.
Instead of creating some sort of messaging system that allows components to communicate with each other, I ended up adding 'System' objects to the mix. This is another popular method that is often used when implementing component systems, and it's the most flexible one that I've used so far.
Entities are nothing more than collections of components
Components are POD structs and have no methods (e.g. a PositionComponent would have nothing more than X and Y values)
Systems update any entities that have the required components
For example, my 'MovementSystem' iterates through each of my entities and checks if they have a VelocityComponent and an InputComponent. If an entity does, the system changes its velocity according to the key currently being pressed.
This removes the issue of components communicating with each other, because now all you need to do is access/modify the data stored in the current entity's components.
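A stripped-down sketch of the idea (names are illustrative, and I use plain null checks instead of the bitmasks mentioned below):

#include <vector>

struct PositionComponent { float x = 0, y = 0; }; // plain data, no methods
struct VelocityComponent { float dx = 0, dy = 0; };

struct Entity {
    PositionComponent *position = nullptr; // null means "entity lacks this component"
    VelocityComponent *velocity = nullptr;
};

class MovementSystem {
public:
    void update(std::vector<Entity> &entities, float dt) {
        for (Entity &e : entities) {
            if (e.position && e.velocity) {           // entity has the required components?
                e.position->x += e.velocity->dx * dt; // systems read/write component data
                e.position->y += e.velocity->dy * dt; // directly - no messaging needed
            }
        }
    }
};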
There are a few different ways that you can work out whether or not the current entity has the required components - I'm using bitmasks. If you decide to do the same, I highly suggest you take a look at this post for a more thorough explanation of the method: https://gamedev.stackexchange.com/questions/31473/role-of-systems-in-entity-systems-architecture
To start off, I'm terrible at this DirectShow stuff. I have almost no clue as to how it works, and I'm trying to access a "value" from a camera that's called Area of Interest x and y; at least, that's what it was called in the camera program that came with the camera. Basically it moves the camera's view left to right or top to bottom (the camera does not physically move). The problem is I can't find how to do that in DirectShow.
But, luckily, I came across a program with source code that had access to this value using DirectShow. After looking through the code I found it, and it looked like this:
case IDC_DEVICE_SETUP:
{
    if (gcap.pVCap == NULL)
        break;

    ISpecifyPropertyPages *pSpec;
    CAUUID cauuid;

    // Ask the capture filter whether it supplies property pages...
    hr = gcap.pVCap->QueryInterface(IID_ISpecifyPropertyPages, (void **)&pSpec);
    if (hr == S_OK)
    {
        // ...then fetch the page GUIDs and show them in a modal property frame.
        hr = pSpec->GetPages(&cauuid);
        hr = OleCreatePropertyFrame(ghwndApp, 30, 30, NULL, 1,
                                    (IUnknown **)&gcap.pVCap, cauuid.cElems,
                                    (GUID *)cauuid.pElems, 0, 0, NULL);
        CoTaskMemFree(cauuid.pElems);
        pSpec->Release();
    }
    break;
}
The problem is that this is a button handler: when you click the button, it creates a window with some of the camera's property pages, which I don't need access to. Basically, there are two problems. First, I don't want to create a window; I just want to access the value programmatically. And second, I only want to access a specific part of the values from this property page. Is there a way to do that?
The IAMCameraControl interface seems to come nearest to what you want, but it's not exactly what you want. I can't recall a standard DirectShow interface that does exactly what you want.
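If your camera happens to expose that movement through one of the standard properties, a hedged sketch of poking at it via IAMCameraControl would look like this (pVCap is the capture filter from your snippet; CameraControl_Pan is only a guess at the relevant property):

#include <dshow.h>

void NudgePan(IBaseFilter *pVCap)
{
    IAMCameraControl *pCamCtrl = NULL;
    HRESULT hr = pVCap->QueryInterface(IID_IAMCameraControl, (void **)&pCamCtrl);
    if (SUCCEEDED(hr))
    {
        long value = 0, flags = 0;
        hr = pCamCtrl->Get(CameraControl_Pan, &value, &flags); // read the current value
        if (SUCCEEDED(hr))
            pCamCtrl->Set(CameraControl_Pan, value + 1,        // write it back, shifted
                          CameraControl_Flags_Manual);
        pCamCtrl->Release();
    }
}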
The property page you see for the IBaseFilter is implemented by the driver for the filter. The driver is free to do whatever it wants with all its knowledge about internal interfaces. There is no need to expose these interfaces to external users. If you are lucky, the camera vendor's property page uses a COM interface that the vendor is willing to document so that you can use it.
So I would ask the camera vendor if they provide an official COM interface that you could use. If they don't, you could try to reverse engineer what they do (not so easy) and hope that they don't change the interface with the next software release.
Regarding the general question given in the comments:
COM is a programming interface that defines how to create objects, how to define the interface (e.g. methods) of these objects and how to call methods on the objects.
DirectShow is based on COM. DirectShow defines several COM interfaces like IFilterGraph as a container for all devices and filters that you use. Another COM interface defined by DirectShow is IBaseFilter which is the base interface for all filters (devices, transformation filters) that you could use.
The individual COM objects are sometimes implemented by DirectShow, but device specific objects like the IBaseFilter for your capturing device are implemented by some DLL delivered by the hardware vendor.
In your case, gcap.pVCap is the IBaseFilter interface for the capture device. In COM, objects can implement multiple interfaces. In your code, pVCap is queried (QueryInterface) for whether it supports the ISpecifyPropertyPages interface. If it does, the OlePropertyFrame is created, which displays the property page implemented by the camera object. Complete control goes to the camera object, which implements the ISpecifyPropertyPages interface. When the camera object displays the property page, it can directly access its own properties. But it could also make the properties available externally by exporting another interface like IMyCameraSpecificInterface.
I have been checking through the examples and the API, but I can't seem to find how to initialize the device and add it to a controller.
Controller controller; //creates an invalid controller
Device device; //creates an invalid device
There seems to be no knowledge to be found past that.
If these are actually supposed to produce valid devices on the spot, then there might be a device-finding error; however, I already have leapd and leapcontrolpanel started up and working.
You don't create Device objects yourself. The Device class describes connected devices. Get a Device object from the Controller object after the controller has "connected". (Currently only one Leap Motion device can be recognized.)
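A minimal sketch (polling for brevity; a Leap::Listener's onConnect() callback is the more idiomatic route):

#include <iostream>
#include "Leap.h"

int main()
{
    Leap::Controller controller; // connects to the Leap service in the background

    while (!controller.isConnected())
        ; // spin until connected (or sleep)

    Leap::DeviceList devices = controller.devices();
    if (!devices.isEmpty())
    {
        Leap::Device device = devices[0]; // a valid Device, obtained from the controller
        std::cout << "Got a device" << std::endl;
    }
}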
I'm trying to code up an interface to a basic ECG device. I need the form to set up the device, send the 'record' message, and save the ECG data that comes back into a file (as well as report a bit on the screen). The hardware device is sent commands and returns data via a serial interface.
My question is about the most appropriate class structures to set up.
Option 1:
MainWindow instantiates a Hardware Device object that reads the ECG info in real time, creates an 'ECG File' class object, and handles it all internally to the Device object.
When recording is finished, MainWindow deletes the Device object and we're done.
Option 2:
MainWindow instantiates the Hardware Device object, which receives a whole lot of data and maintains it as a publicly accessible data structure (member).
MainWindow then refers to that Device object's data structure, instantiates the ECG File class object itself, and writes it out to a file.
Ideally I'd like to write the data out in different formats (e.g. classes that specify the format?)
Sorry if the question's not that clear. I guess I'm asking whether it's appropriate for a hardware device object to also manage all its own data, or whether it should pass the data back for the main window to process itself.
I've had a go at option 1, but it's getting ugly and I'm not sure whether I've mis-designed the thing from the start.
Any/all views appreciated!
Pete
I don't know too much about the domain, but I have designed several systems that use devices, and I have also seen some designs for devices. There are many ways to design this kind of stuff, but I like to write wrappers for all devices and use an open, close, read, and write interface.
So essentially, an abstract class called Device is created. Specific devices can be designed to extend this base class. Then a builder class is created to construct and initialize the specific class. On top of this base class, a system can be built to handle the device's input and output. I tend to design to interfaces and keep classes as simple as possible.
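A rough sketch of that structure (all names are illustrative, and the serial-port plumbing is stubbed out):

#include <cstddef>
#include <memory>
#include <string>

// Abstract wrapper: every device exposes the same open/close/read/write interface.
class Device {
public:
    virtual ~Device() = default;
    virtual bool open(const std::string &port) = 0;
    virtual void close() = 0;
    virtual std::size_t read(char *buffer, std::size_t maxBytes) = 0;
    virtual std::size_t write(const char *data, std::size_t numBytes) = 0;
};

// Concrete ECG device extending the base class.
class EcgDevice : public Device {
public:
    bool open(const std::string &port) override { /* open the serial port */ return true; }
    void close() override { /* close the serial port */ }
    std::size_t read(char *buffer, std::size_t maxBytes) override { /* read ECG bytes */ return 0; }
    std::size_t write(const char *data, std::size_t numBytes) override { /* send a command */ return numBytes; }
};

// Builder: creates and initializes the specific device behind the abstract interface.
std::unique_ptr<Device> makeEcgDevice(const std::string &port)
{
    auto dev = std::make_unique<EcgDevice>();
    dev->open(port);
    return dev;
}

With this split, the MainWindow (or a separate file-writer class per output format) consumes data through the Device interface instead of reaching into the device's internals.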