My goal is to write a client for ONVIF PTZ cameras such that I can view data (pan/tilt/camera/lens values), send control commands, and view the video. I was somewhat successful using C++/gsoap with an Axis camera. Then I tried it with a camera from a different company and it didn't work. I believe the problem is that the other camera uses a different version of "something" - I'm not sure if it is a different schema, a different profile version, a different version of ONVIF, or a different version of SOAP.
I want to make a client that supports any ONVIF camera, or at least the vast majority of them. I don't want to have to say "Sorry, your camera is 1 year old and that protocol is no longer supported".
I was using onvifcpplib, which seems to have been abandoned for a while, and now its GitHub project forwards to rapidonvif, which looks completely different.
For almost two days now I have been researching ONVIF and trying to make heads or tails of what this will take. If I go here: https://www.onvif.org/profiles/specifications/specification-history/ I see no less than 18 different spec versions!
This version seems to affect WSDL file versions, so for example I can see there is a version 1.0 for the media wsdl here: http://www.onvif.org/ver10/media/wsdl/media.wsdl ... but there is also a version 2.0 of the same file here: http://www.onvif.org/ver20/media/wsdl/media.wsdl .
And I don't think they are backward compatible. But I cannot find one for 2.6 - so:
http://www.onvif.org/ver26/media/wsdl/media.wsdl does not exist.
And this is just one of 15 wsdl files that I need to use gsoap with.
I'm really confused on what to do. Is there an ONVIF expert out there that can help me with some of these questions?
Question 1) Is there a master list or something that tells me as a client writer which wsdl versions I must support and which ones are not backward compatible?? Trying every possible permutation of all 18 versions with all 15 wsdl files would take forever! Some of them might be backward compatible and others not - how do I know which are which?
Question 2) On top of the network interface specifications, there are different profile versions. Are some of these not backwards compatible as well?
Question 3) On top of the network interfaces specifications AND profile versions, there are multiple versions of SOAP - 1.1 and 1.2. Do I need to worry about some cameras using 1.1, or does ONVIF always use 1.2?
Question 4) How am I supposed to compile with multiple versions using gsoap? If I use wsdl2h followed by soapcpp2 for version 1.0 and 2.0 of the ptz wsdl for example, and then try to include both into the same project there will be conflicts. I don't want to say to users... sorry, but you will have to research and find out if your camera uses ONVIF version such-and-such so you have to use this other executable or plugin.
Question 5) Even if I was able to get multiple versions to compile within the same app, how will I know which version to use when connecting to a particular camera? Do I query the camera and say "Which version are you? OK, you're using this particular version and profile, so I'll use this set of commands"?
Question 6) With so many variations and versions, how on earth can one be expected to write an ONVIF client that supports most cameras without spending months to years on development? Is there any third party library or sdk that abstracts all this versioning voodoo?
Thanks for any help you can provide!
1+2) A client can support whichever set of wsdl documents it chooses, because new versions only add to them; existing types and operations are not changed. If I remember correctly, this is specified in the Core document.
3) Only SOAP 1.2 is used as far as I am aware.
4) I don't have a good answer, I wrote my own C++ code generator which dealt with these problems.
5) GetServices will return the version of each service the device provides (see the sketch after this list).
6) It's not that bad... I think most features can be determined one way or another. There is definitely a lot of confusion prior to version 2.0. The main issues I have found are with implementations of the devices not following the specification.
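To illustrate answer 5, here is a rough sketch of calling GetServices through a gsoap-generated proxy. The names below (soapDeviceBindingProxy.h, DeviceBindingProxy, _tds__GetServices and so on) are what wsdl2h/soapcpp2 typically emit for devicemgmt.wsdl with the standard ONVIF typemap.dat, so treat them as placeholders for whatever your own generation step produces.

// Sketch only: ask the device which services it implements and which version of each.
#include "soapDeviceBindingProxy.h"   // generated by wsdl2h + soapcpp2
#include <cstdio>
#include <iostream>

void listOnvifServices(const char* deviceServiceUrl)   // e.g. "http://<camera-ip>/onvif/device_service"
{
    DeviceBindingProxy device(deviceServiceUrl);

    _tds__GetServices request;
    request.IncludeCapability = false;            // namespaces, addresses and versions only
    _tds__GetServicesResponse response;

    if (device.GetServices(&request, response) != SOAP_OK) {
        device.soap_stream_fault(std::cerr);      // print the SOAP fault and give up
        return;
    }

    for (const auto* service : response.Service) {
        // Namespace identifies the service (device, media, ptz, ...); Version is the
        // release of that service's WSDL the camera speaks - the thing to branch on
        // when deciding between ver10 and ver20 style calls.
        std::printf("%s\n  XAddr:   %s\n  Version: %d.%d\n",
                    service->Namespace.c_str(),
                    service->XAddr.c_str(),
                    service->Version ? service->Version->Major : 0,
                    service->Version ? service->Version->Minor : 0);
    }
}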
Related
(thread title) Bedrock Minecraft (better known as "Windows 10 Edition" Minecraft): is it possible to create a sub-category UI menu similar to the "Mods" menu in Java Minecraft, and to add mods by making a mods folder containing mods coded in C++ (the language Bedrock/Windows 10 MC is coded in)?
Although this has been answered by #Kosaro, I just want to add that you can create plugins to use on a PocketMine server. These are mainly written in YAML and PHP, so it's not really what you're looking for exactly, but it's a bit more open than just the add-ons that Minecraft allows you to create. Plugins like this are what make things like slapping an NPC on a server like Mineplex possible, or an economy system on a factions server. Although this all depends on whether you have a PC to host the server, and whether you're willing to port forward to play with others.
It is possible through Blocklauncher, and these mods are called native mods. It involves disassembling a file in the Minecraft APK and using the function calls to call our own functions. It is actually quite complex. I only found two places to learn from:
Tutorials by artus9033 (I've never used these Tutorials)
Github page by byteandahalf (Note: Page 9 is WIP)
Blocklauncher also uses JavaScript as a bridge between C++ and Minecraft, but it has very few functionalities compared to C++. Add-ons are also good but have far fewer functionalities than JavaScript.
Hope it helps you.
Edit 1: Blocklauncher is dead; I didn't know that before. It will still work for older versions.
Edit 2: There is another app inspired by Blocklauncher which you may like:
https://github.com/TripleCamera. I don't know how to install it, but he made it. Blocklauncher scripts will not work (I guess). Blocklauncher works for version 12 and below of Minecraft.
No, Minecraft Bedrock edition (aka Windows 10 edition) does not support C++ mods. The only type of mod that Bedrock edition supports is "add-ons", which are either resource packs (which change textures, models, sounds, animations, etc.) or behavior packs (which change how mobs behave).
You are able to modify the UI using resource packs, here is an example from the official wiki: https://minecraft.gamepedia.com/Tutorials/Bedrock_Edition_creator_guidelines#UI
You can find more information and tutorials on the official reference page: https://minecraft.gamepedia.com/Add-on
Yes, it is possible, but it would be very involved, and I'm not sure this is what the question really pertains to. The way I've seen it done is by a program (a modding client) injecting itself into the Minecraft process. The injection somewhat resembles a virus's methodology, and the source code for the hacking/modding program was mostly C++. Look up the Horion hacked client on GitHub. The client seems to be dead now, though.
My question is not about best practices for REST API URI design.
I've decided for myself that I'm going to use the following approach:
https://theserver.com/api/v1/whatsoever
I'm much more curious about how to design the actual sourcecode in advance to easily extend the API with more versions.
Let's assume we're using a classic MVC framework for our favorite programming language. Our API works fine, but we want to add and change functionality that is not backwards compatible. We did think about a nice URI design, but didn't think about how our code should look in order to work nicely with different API versions. Crap... what now?
Question: What should the source code for a versionable REST API look like?
Nice to have:
Not mixing up the different versions
Still best use of DRY
Don't reinvent the wheel over again
(this list will be extended)
Possible answers I can think of:
Same project - different Namespaces & Subfolders
Namespace: namespace App\Http\Controllers\v1\Users;
Folder: {root_folder}\app\Http\Controllers\v1\Users\UserLoginController.php
Different projects
Point https://theserver.com/api/v1/whatsoever to project 1
and https://theserver.com/api/v2/whatsoever to project 2
Here is my logic. First of all, we need to answer the question "Why do we need versioning?"
- If we can extend our API in a backward-compatible way, we don't need versioning (all applications and services keep using the same API and no changes are needed).
- If we cannot provide a backward-compatible API, we need to introduce the next version of our API. This allows all applications and services to migrate smoothly to the new version while the old one keeps working. After a time period (say, one year), the first version can be deprecated and shut down.
So based on the answer above I would keep API versions in separate branches in my repository. One codebase, multiple branches for each version. First branch corresponds to v1 which is stable and receives only fixes. No active development here. Second branch corresponds to v2 which has all new features.
I am noticing that device is not part of the 3.0 API ... what do I use instead?
zmq::device (ZMQ_QUEUE, clients, workers);
I found that the devices have been moved to here: https://github.com/zeromq/libzfl
It's a little confusing, so here's the story.
When I inherited maintenance of 0MQ/2.x, it had a zmq_device() function, and a set of external device apps, small main programs with XML configuration.
I'd previously tried to improve and document these two layers, which people were playing with, but the patches were refused by the maintainers. We then moved the external apps to the zdevices project, with more flexible configuration, etc. In the end these got no adoption and were abandoned. zdevices used libzfl (and XML) for its configuration. Most of libzfl got refactored into the CZMQ API (which people do use, a lot).
Sustrik then decided to remove the zmq_device call from 0MQ/3.0, which I explained to the list with the "less is more" argument. People didn't really like this since it broke a lot of existing applications, for a fairly weak reason.
So after the XS fork, I patched zmq_device back into 0MQ/3.1. The C++ API isn't part of the core library, but anyone using it is welcome to patch a device method back into that.
HTH.
AFAIK, there are currently no devices available for 3.x, but according to the readme:
Less is More
Pre-built devices and zmq_device() removed. Should be made available
as a separate project(s).
Exactly one year ago, pieterh wrote the following on the site about the reasons for removing the devices:
It's mostly about being able to improve the device layers independently from the libzmq core. It's been hard to improve these device layers as part of libzmq core, mainly because the core API is considered sacred in ways that other stuff isn't. I.e. one does not touch a core API except between major versions. So, one does not touch devices if they are part of the core, except between major versions.
Just use C API for now:
zmq_device (ZMQ_QUEUE, clients, workers);
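For completeness, a minimal sketch of the same thing with the plain C API (socket types, endpoints and the single I/O thread below are just illustrative); on 3.2 and later the equivalent call is zmq_proxy(frontend, backend, NULL).

// Minimal shared-queue device using the C API, roughly what zmq::device(ZMQ_QUEUE, ...) wrapped.
#include <zmq.h>
#include <assert.h>

int main(void)
{
    void *ctx = zmq_init(1);                       // one I/O thread

    void *clients = zmq_socket(ctx, ZMQ_ROUTER);   // frontend: clients connect here
    void *workers = zmq_socket(ctx, ZMQ_DEALER);   // backend: workers connect here
    assert(zmq_bind(clients, "tcp://*:5555") == 0);
    assert(zmq_bind(workers, "tcp://*:5556") == 0);

    // Blocks forever, shuttling messages between the two sockets.
    // On 0MQ 3.2+ use zmq_proxy(clients, workers, NULL) instead.
    zmq_device(ZMQ_QUEUE, clients, workers);

    zmq_close(clients);
    zmq_close(workers);
    zmq_term(ctx);
    return 0;
}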
I wrote a program in C++/OpenGL (using the Dev-C++ compiler) for my calculus 2 class. The teacher liked the program, and he asked me to somehow put it online so that instead of downloading the .exe, it can just be run in the web browser. Kinda like Java applets run in the browser.
The question is:
How if possible, can I display a C++/OpenGL program in a web browser?
I am thinking of moving to JOGL, which is a Java binding for OpenGL, but I'd rather stay in C++ since I am more familiar with it.
Also, is there any other, better and easier 3D web-based API that I could consider?
There has been a lot of activity recently around WebGL. It is a binding from JavaScript to native OpenGL ES 2.0 implementations, designed as an extension of the canvas HTML5 element.
It is supported by the nightly builds of Firefox, Safari, Chrome and Opera.
Have a look at these tutorials, based on the well known NeHe OpenGL tutorials.
Several projects based on WebGL are emerging, most notably scene-graph APIs.
From Indie teams: SceneJS, GLGE, SpiderGL.
From Google: the team behind O3D plugin is trying to implement a pure WebGL backend (source) for the project, so that no plugin will be necessary.
From W3C/Web3D: There is an ongoing discussion to include X3D as part of any HTML5 DOM tree, much like SVG in HTML4. The X3DOM project was born last year to support this idea. Now it is using WebGL as its render backend, and is version 1.0 since March 2010.
I'm almost sure that WebGL is the way to go in the near future. Mozilla/Google/Apple/Opera are promoting it, and if the technology works and there is sufficient customer/developer demand, maybe Microsoft will implement it on IE (let's hope that there will be no "WebDX"!).
AFAIK, there's only 3 options:
Java. It includes the whole OpenGL stack.
Google's Native Client (NaCl), essentially a plugin that lets you run executable x86 code. Just compile it and call it from HTML. Highly experimental, and nobody will have it already installed. Not sure if it gives you access to OpenGL libraries.
Canvas:3D. Another very experimental project. This is an accelerated 3D API accessible from JavaScript. AFAICT, it's only on experimental builds of Firefox.
I'd go for Java, if at all.
OTOH, if it's mostly vectorial work (without lots of textures and illumination/shadows), you might make it work in SVG simply by projecting your vectors from 3D to 2D. In that case, you can achieve cross-browser compatibility using SVGWeb; it's a simple JavaScript library that allows you to transparently use either the browser's native SVG support or a Flash-based SVG renderer.
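If you go the SVG route, the projection itself is simple. A tiny sketch (the focal length and screen size are arbitrary example values):

// Perspective-project a 3D point onto a 2D screen plane for SVG output.
struct Vec3 { double x, y, z; };
struct Vec2 { double x, y; };

Vec2 project(const Vec3& p, double focalLength = 500.0,
             double screenW = 640.0, double screenH = 480.0)
{
    // Assumes the camera looks down +z and the point is in front of it (p.z > 0).
    double scale = focalLength / p.z;
    return { screenW / 2.0 + p.x * scale,
             screenH / 2.0 - p.y * scale };   // flip y: screen y grows downward
}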
Do you really have the time to rewrite it? I thought students were meant to be too busy for non-essential assignment work.
But if you really want to do it, perhaps a preview of it running as a flash movie is the easiest way. Then it's just a matter of doing that and you could provide a download link to the real application if people are interested.
Outside of Java, in-browser OpenGL is really in its infancy. Google's launched a really cool API and plugin for it though. It's called O3D:
http://code.google.com/apis/o3d/
Article about the overall initiative:
http://www.macworld.com/article/142079/2009/08/webgl.html
It's not OpenGL, but the Web3D Consortium's X3D specification may be of interest.
Another solution is to use Emscripten (a source-to-source compiler).
Emscripten supports C/C++ and OpenGL and will translate the source into HTML/JavaScript.
To use Emscripten you will need to use SDL as a platform abstraction layer (for getting an OpenGL context as well as loading images).
Emscripten is currently being used in Unreal Engine and will also be used in the Unity 5 engine.
Read more about the project here:
https://github.com/kripken/emscripten
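As a rough illustration of the workflow (file names and flags are examples, not a tested recipe): a fixed-function SDL + OpenGL main() like the one below is the kind of thing Emscripten can turn into an HTML page, with SDL emulated and GL mapped onto WebGL; old glBegin/glEnd style code additionally needs Emscripten's legacy GL emulation switched on.

// Sketch: native build    g++ main.cpp -lSDL -lGL -o demo
//         emscripten      emcc main.cpp -s LEGACY_GL_EMULATION=1 -o demo.html
#include <SDL/SDL.h>
#include <SDL/SDL_opengl.h>

int main(int, char**)
{
    SDL_Init(SDL_INIT_VIDEO);
    SDL_SetVideoMode(640, 480, 32, SDL_OPENGL);   // becomes a WebGL canvas under Emscripten

    glClearColor(0.1f, 0.2f, 0.3f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT);

    glBegin(GL_TRIANGLES);                        // old immediate-mode drawing
        glVertex2f(-0.5f, -0.5f);
        glVertex2f( 0.5f, -0.5f);
        glVertex2f( 0.0f,  0.5f);
    glEnd();

    SDL_GL_SwapBuffers();

    // A real port would drive its render loop with emscripten_set_main_loop()
    // rather than a blocking while-loop, so the browser tab stays responsive.
    SDL_Delay(2000);                              // crude pause so the native window is visible
    SDL_Quit();
    return 0;
}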
Two approaches:
Switch to Java. However, your application will suffer from a loss of performance as a trade off for portability. But since Java is everywhere, this approach ensures that your code can be executed in most browsers.
Use ActiveX, which allows you to run native binary code for Microsoft Windows. This is not recommended in production because ActiveX is well known as a potential security hole, but since your lecturer is the one viewing it, security doesn't seem to be a big deal. This is applicable to the Microsoft platform (Windows + IE) only.
I'm trying to understand how graphics card versions, OpenGL versions and the API headers work together. I've read up on the OpenGL 2.0/3.0/3.1 debacle on the OpenGL forums and elsewhere but it's still not clear how this will work out for me as a developer (new to OpenGL). (btw I'm using nVidia in this question as an example because I'm buying one of their cards, but obviously I'd like to be vendor agnostic for the software I develop).
First, there is the GPU which needs to support an OpenGL version. But e.g. nVidia have drivers for older video cards to support OpenGL 3. Does that mean that these drivers implement certain functionality in software to emulate new functionality that isn't in the hardware?
Then there are the API headers. If I decide to write an application for OpenGL 3, should I wait until Microsoft releases an updated version of the platform SDK with headers that support this version? How do you select which version of the API to use - through a preprocessor define in the code, or does updating to the latest platform SDK simply upgrade to whatever is the latest version (can't imagine this last option, but you never know...).
What about backward compatibility? If I write an application targeting OpenGL 1.2, will users who have installed the drivers for their card supporting OpenGL 3 still be able to run it, or should I test the card's features/supported version in my application? Reading http://developer.nvidia.com/object/opengl_3_driver.html seems to confirm that, at least for nVidia cards, applications written against 1.2 will continue to work, but this also implies that other vendors may stop supporting the 1.2 API. Basically that would put me (potentially) in a position in the future where the software wouldn't work with recent cards because they don't support 1.2 any more. But if I develop for OpenGL 3 or even 2 today, I may shut out users whose GPUs only support 1.2.
I don't need the fancy features in OpenGL - I hardly use any shading at all, the fixed pipeline works fine for me (my application is CAD-like). What is the best version to base a new application on, with the expectation that it will be a long-lived application with incremental updates over the years to come?
I'm probably forgetting other issues that are relevant in this context, any insights are much appreciated.
From my OpenGL experience, it seems that "targeting" a given version is just accessing the various extensions that have been added for that version. So the only reason you would "target" OpenGL version 3 is if you want to use some of the extensions that are new to version 3. If you don't really use version 3 extensions (if you're just doing basic OpenGL stuff), then you naturally aren't "targeting" version 3.
In Visual Studio, you will always link your application with opengl32.lib, and opengl32.lib doesn't change across different OpenGL versions. OpenGL instead uses wglGetProcAddress() to dynamically access OpenGL extensions/versions at run time instead of at compile time. Namely, if a given driver doesn't support an extension, then wglGetProcAddress() will return NULL at run time when that extension's procedure is requested. So in your code you will need to implement logic that handles the NULL return case. In the simplest scenario, you could just print an error and say "this feature isn't available, so this program will behave ...". Or you can find other alternative methods for doing the same thing that don't use the extension.

For the most part, you'll only get NULL returns from wglGetProcAddress() if your application is running on old hardware/drivers that don't support the OpenGL version that added the extension you're looking for. However, in future years you'll want to keep abreast of the things that newer OpenGL versions decide to deprecate. I haven't read too much into the 3.1 spec, but apparently they're introducing a deprecation model where older technology/extensions may be deprecated, which will open the door for newer hardware/drivers to no longer support deprecated extensions, in which case wglGetProcAddress() will again return NULL for those extensions. So if you put logic in for handling the NULL return on wglGetProcAddress(), you should still be fine even for deprecated extensions. It just might become necessary for you to implement better alternatives, or to make newer extensions the default.
As far as the versioned API headers go, the changes to the headers are mostly just changes to allow access to new functions returned by wglGetProcAddress(). So if you include the API header for version 2, you're good to go as long as you only need the extensions for OpenGL 2. If you need to access functions/extensions that were added in version 3, then you just replace your version 2 header with the version 3 header, which just adds some additional function pointer typedefs associated with the new extensions, so that when you call wglGetProcAddress(), you can cast the return value to the right function pointer. Example:
// Function pointer, typed via the typedef that the extension headers provide
PFNGLGENQUERIESARBPROC glGenQueriesARB = NULL;
...
// Resolved at run time once a GL context exists; returns NULL if the driver lacks it
glGenQueriesARB = (PFNGLGENQUERIESARBPROC)wglGetProcAddress("glGenQueriesARB");
In the above example, the typedef for PFNGLGENQUERIESARBPROC is defined in the API headers. glGenQueriesARB was added in 1.2 I believe, so I'd need at least the 1.2 API headers to get the definition of PFNGLGENQUERIESARBPROC. That's really all the headers do.
One more thing I want to mention about 3.1. Apparently with 3.1 they're deprecating a lot of OpenGL functionality that my company has used pretty ubiquitously, including display lists, the glBegin/glEnd mechanisms, and the GL_SELECT render mode. I don't know much about the details, but I don't see how they can do that without having to create a new opengl32.lib to link with, because it seems that most of that functionality is embedded into opengl32.lib, and not accessed through wglGetProcAddress. Additionally, is Microsoft going to include that new opengl32.lib in their Visual Studio versions? I don't have an answer for those questions, but I would think that, even though 3.1 deprecates it, this functionality is going to be around for a long time. If you keep linking with your current opengl32.lib, it should continue to work almost indefinitely, although you may lose hardware acceleration at some point.

The vast majority of OpenGL tutorials available on the web use the glBegin/glEnd methods for drawing primitives. The same is true for GL_SELECT, although a lot of hardware no longer accelerates the GL_SELECT render mode. Even opengl.org's tutorials use the supposedly deprecated glBegin/glEnd methods. And I have yet to find a "getting started" tutorial that uses only 3.1 features, avoiding the deprecated functionality (certainly if someone knows of one, link me to it). Anyway, while it seems 3.1 has thrown away a lot of the old in favor of new stuff, I think the old stuff will still be around for quite a while.
Long story short, here's my advice. If your OpenGL needs are simple, just use the basic OpenGL functionality. Vertex arrays are supported on versions 1.1 - 3.1, so if you're looking for maximum lifetime and you're starting fresh, that's probably what you should use. However, my opinion is glBegin/glEnd and display lists are still going to be around for a while even though they are deprecated in version 3, so if you want to use them, I wouldn't fret too much.

I would avoid GL_SELECT mode for picking in favor of an alternate method. Many of the hardware vendors have considered GL_SELECT deprecated for years now, even though it just got deprecated with version 3. In our application we get lots of issues where it doesn't work on ATI cards and integrated GMA cards. Subsequently we just implemented a picking method using occlusion queries which seems to fix the problem. So to do it right the first time, avoid GL_SELECT.
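As a sketch of that occlusion-query picking idea (simplified, and not our actual code): restrict rasterization to a few pixels around the cursor with the scissor test, draw each pickable object inside its own query, and report the objects whose queries passed any samples. The GL 1.5 entry points used here (glGenQueries and friends) must be loaded with wglGetProcAddress on Windows, just like the ARB variants shown earlier.

#define GL_GLEXT_PROTOTYPES   // assume prototypes are available (e.g. Mesa); on Windows load them at run time instead
#include <GL/gl.h>
#include <GL/glext.h>
#include <vector>

struct Pickable { unsigned id; };
void drawObject(const Pickable&) { /* issue the object's geometry here */ }

// Assumes a current context, the scene's matrices already set up, and a depth
// buffer from the normal render pass. x/y are in GL window coordinates
// (origin bottom-left), so convert the mouse y before calling.
std::vector<unsigned> pickAt(int x, int y, const std::vector<Pickable>& objects)
{
    std::vector<unsigned> hits;
    std::vector<GLuint> queries(objects.size());
    glGenQueries((GLsizei)queries.size(), queries.data());

    glEnable(GL_SCISSOR_TEST);
    glScissor(x - 1, y - 1, 3, 3);                         // 3x3 pixel pick region
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);   // don't disturb the visible frame
    glDepthMask(GL_FALSE);

    for (size_t i = 0; i < objects.size(); ++i) {
        glBeginQuery(GL_SAMPLES_PASSED, queries[i]);
        drawObject(objects[i]);
        glEndQuery(GL_SAMPLES_PASSED);
    }

    for (size_t i = 0; i < objects.size(); ++i) {
        GLuint samples = 0;
        glGetQueryObjectuiv(queries[i], GL_QUERY_RESULT, &samples);
        if (samples > 0) hits.push_back(objects[i].id);    // something of this object was visible
    }

    glDeleteQueries((GLsizei)queries.size(), queries.data());
    glDepthMask(GL_TRUE);
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    glDisable(GL_SCISSOR_TEST);
    return hits;
}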
Good Luck.
As far as drivers go, I think that in some cases missing functionality is written in software. The whole point of using OpenGL (aside from the acceleration) is to write to an API and not care HOW it's implemented.
In my experience, you don't declare OpenGL versions. The function calls between API versions are non-overlapping. Be aware of the spec and if, for example, you call a 2.0 method, your application's minimum version just became 2.0. I have display applications written to target OpenGL (stupid old Sun video cards) and they work just fine on brand-new nVidia 200-series cards.
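For example (a minimal sketch, not from the answer above), you can check at run time whether the driver reports at least the version whose calls you rely on, and fall back if not:

#include <GL/gl.h>
#include <cstdio>

// Returns true if the driver's "major.minor ..." version string is at least the
// requested version. Assumes a current GL context has already been created.
bool hasGLVersion(int wantMajor, int wantMinor)
{
    const char* version = reinterpret_cast<const char*>(glGetString(GL_VERSION));
    if (!version) return false;                                  // no context, or query failed

    int major = 0, minor = 0;
    if (std::sscanf(version, "%d.%d", &major, &minor) != 2) return false;
    return major > wantMajor || (major == wantMajor && minor >= wantMinor);
}

// e.g. if (!hasGLVersion(2, 0)) { /* stick to the 1.x code path */ }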
I don't think there is any way to guarantee that an API won't change in the future, especially if you don't control it. Your application may stop working in 10 years when we're on OpenGL 6.4. Hopefully you've written a good enough application that customers will be willing to pay for an upgrade.
This fellow, Mark J. Kilgard, has been publishing nVidia source code and documentation on OpenGL since the '90s, and has been doing so on behalf of one of the two biggest names in gaming hardware.
//-------------------------------------------------------------------------------------
... the notion that an OpenGL application is "wrong" to ever use immediate mode is overzealous. The OpenGL 3.0 specification has even gone so far as to mark immediate mode in OpenGL for "deprecation" (whatever that means!); such extremism is counter-productive and foolish. The right way to encourage good API usage isn't to try to deprecate or ban API usage, but rather educate developers about the right API usage for particular situations.