OpenCV where is tracking.hpp - C++

I want to use OpenCV's implementation of the TLD tracker. The internet says that I have to include this file: opencv2/tracking.hpp (e.g. see https://github.com/Itseez/opencv_contrib/blob/master/modules/tracking/samples/tracker.cpp).
But there is no such file.
Well, what must I do to use TrackerTLD in my C++ project?
(OpenCV 3.0.0 beta for Windows, installed from the .exe package from opencv.org)

As Floyd mentioned, to use TrackerTLD you need to download the OpenCV contrib repo. The instructions are in the link, so explaining them here shouldn't be necessary.
However, in my opinion, using TrackerTLD from the OpenCV repo is a bad option - I tested it (about a week or two ago) and it was terribly slow. If you are thinking about real-time image processing, consider using another implementation of TLD or some other tracker. Right now I'm using this implementation and it's working really well. Note that tracking an object is quite a time-consuming task, so to track in real time I have to downscale every frame from 640x480 to 320x240 (it would probably work well, and definitely faster, at an even lower resolution). On the web page of the author of that implementation you may find some information about the TLD algorithm (and implementation) and another tracker created by the same author - CMT (Consensus-based Matching and Tracking of Keypoints). Unfortunately I haven't tested it yet, so I can't tell you anything about it.
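For reference, below is a minimal sketch of how TrackerTLD is typically driven once OpenCV is built together with the opencv_contrib tracking module. It assumes the 3.0/3.1-style Tracker::create("TLD") factory (later 3.x releases switched to TrackerTLD::create()), and the hard-coded bounding box is just a placeholder for your own ROI selection:

    #include <opencv2/core.hpp>
    #include <opencv2/videoio.hpp>
    #include <opencv2/highgui.hpp>
    #include <opencv2/tracking.hpp>   // only available when opencv_contrib is built in

    int main()
    {
        cv::VideoCapture cap(0);
        if (!cap.isOpened()) return 1;

        cv::Mat frame;
        cap >> frame;

        // Placeholder ROI - replace with your own object selection.
        cv::Rect2d bbox(100, 100, 120, 120);

        // OpenCV 3.0/3.1 factory call; later 3.x uses cv::TrackerTLD::create().
        cv::Ptr<cv::Tracker> tracker = cv::Tracker::create("TLD");
        tracker->init(frame, bbox);

        while (cap.read(frame))
        {
            if (tracker->update(frame, bbox))
                cv::rectangle(frame, cv::Rect(bbox), cv::Scalar(0, 255, 0), 2);
            cv::imshow("TLD", frame);
            if (cv::waitKey(1) == 27) break;   // Esc to quit
        }
        return 0;
    }

As noted above, don't expect real-time performance from this particular implementation on larger frames.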

Grabbing frames when only a VideoInfoHeader2 structure is present

I'm working on an application that analyzes video files. Being no expert on DirectShow, I used simple code for the analysis of all frames (SampleGrabber, callback etc.). This works fine for all media files, even when they are decoded with a VideoInfoHeader2 structure (although it shouldn't, as stated everywhere).
The problem is with grabbing a single frame. For this I used IMediaDet, and it doesn't work if there's only a VideoInfoHeader2 and no VideoInfoHeader. I tried modifications of my analysis code (OneShot, Seek), but that doesn't work either.
All the sources on the internet concerning this are not very helpful, as they point to SDK/DX examples that aren't accessible anymore, or they just say that the modification would be "easy". Well, maybe for a DX expert ... (But I need to use the car, not build it first ... ;-)
As the matter became more and more important to me, my "workaround" is to take all videos that come with VideoInfoHeader2, re-encode and save them with VideoInfoHeader, and do the analysis/grabbing on those. Very resource-consuming, and the opposite of smart ...
Any help appreciated.
You outlined the necessary steps, which are still the easiest solution (provided that you don't give up on the Windows API; using a third-party library might be easier in comparison, but that is beyond the scope of this question).
Sample Grabber and IMediaDet are parts of the deprecated DirectShow Editing Services, whose development stopped long ago. If you are not satisfied with the stock API, you have to use a more flexible replacement. For example, you can take the source of the similar Sample Grabber sample from an older DirectX or Platform SDK and extend it to support VIDEOINFOHEADER2.
IMediaDet is nothing but a COM class that internally builds its own graph and tries to decode the video. It is inflexible, and building your own graph is almost always the more reliable solution.
Microsoft's answer to this problem - having abandoned DirectShow development - is the newer Media Foundation API. However, there are reasons why this "answer" is not so good: limited OS compatibility, limited support for codecs and formats, and a completely new API that has little in common with DirectShow, so you would need to redesign your application.
Altogether, you either have to find a Sample Grabber replacement using one of the popular, already explained methods (even if they don't look very helpful), or switch to another API or a third-party library. Another possible solution is to use a different filter/codec that is capable of decoding into a VIDEOINFOHEADER-formatted media type.
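To illustrate the kind of extension mentioned above, here is a rough sketch (not taken from any particular SDK sample) of a helper that reads the bitmap header from either format block; VIDEOINFOHEADER2 mainly adds interlacing and picture aspect ratio fields around the same BITMAPINFOHEADER, so most per-frame logic can stay unchanged:

    #include <dshow.h>
    #include <dvdmedia.h>   // VIDEOINFOHEADER2 / FORMAT_VideoInfo2

    // Return the bitmap header regardless of which format block the media type
    // carries, or NULL if the format block is something else entirely.
    const BITMAPINFOHEADER* GetBitmapInfoHeader(const AM_MEDIA_TYPE& mt)
    {
        if (mt.formattype == FORMAT_VideoInfo && mt.cbFormat >= sizeof(VIDEOINFOHEADER))
            return &reinterpret_cast<const VIDEOINFOHEADER*>(mt.pbFormat)->bmiHeader;

        if (mt.formattype == FORMAT_VideoInfo2 && mt.cbFormat >= sizeof(VIDEOINFOHEADER2))
            return &reinterpret_cast<const VIDEOINFOHEADER2*>(mt.pbFormat)->bmiHeader;

        return NULL;
    }

The width, height and bit depth needed to interpret the grabbed buffer then come out of the returned BITMAPINFOHEADER exactly as in the VIDEOINFOHEADER-only case.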

Trying to create a HOG implementation for detecting people

I would like to create results similar to the video found at this link. I tried the Object Detection and Localization toolkit made for the work done by Dalal and Triggs, found here, and I tried trainHOG (https://github.com/DaHoC/trainHOG), an OpenCV-based program that can be trained to detect people.
For the ODL toolkit, I had problems compiling because its requirements are now dated. The Ubuntu packages that provide the requirements for ODL (ImLib, Boost, and Blitz) are not compatible with the versions required by ODL. I actually went through a lot of effort building older versions of the required packages but hit a dead end with an error saying:
error: no matching function for call to ‘boost::program_options::validation_error::validation_error(std::basic_string)’ + argument.desc.find(*ai, false).format_name());
For trainHOG I was able to detect people, but only if they were very small in the image. I also got a lot of false positives. I trained it with 1133 positive images and ~8500 negative images, all of which were 64x128 in size.
OpenCV has an API for the HOG descriptor which you can use easily.
However, HOG is very simple to implement and it shouldn't take you a great deal of time. You can refer to this tutorial, which I found to be very helpful for understanding HOG.
If you still run into problems, let me know and I can help you code it.
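If the stock detector is enough for your needs, the OpenCV side really is only a few lines. Here is a minimal sketch using the default people detector that ships with OpenCV (trained on 64x128 windows); the input file name is just a placeholder:

    #include <opencv2/core.hpp>
    #include <opencv2/imgcodecs.hpp>
    #include <opencv2/imgproc.hpp>
    #include <opencv2/highgui.hpp>
    #include <opencv2/objdetect.hpp>
    #include <vector>

    int main(int argc, char** argv)
    {
        // Placeholder input image - pass your own file on the command line.
        cv::Mat img = cv::imread(argc > 1 ? argv[1] : "people.jpg");
        if (img.empty()) return 1;

        cv::HOGDescriptor hog;
        hog.setSVMDetector(cv::HOGDescriptor::getDefaultPeopleDetector());

        // Multi-scale sliding-window detection; parameters follow the OpenCV samples.
        std::vector<cv::Rect> found;
        hog.detectMultiScale(img, found, 0, cv::Size(8, 8), cv::Size(32, 32), 1.05, 2);

        for (const cv::Rect& r : found)
            cv::rectangle(img, r, cv::Scalar(0, 255, 0), 2);

        cv::imshow("people", img);
        cv::waitKey(0);
        return 0;
    }

Tuning hitThreshold and the grouping threshold (the 0 and 2 above) is the usual way to trade false positives against missed detections.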

Is libdmtx dead, suggested replacement?

I've been using libdmtx in a project and looking to update to a newer version, but it seems the project hasn't been updated in well over a year. The last update/version was June, 2011. The Git repository shows that the last commit was August, 2011. Finally, the author's web site, which previously promoted libdmtx, Dragonfly Logic, is dead with a 404 Not Found error.
Is there another data matrix library that meets these criteria?
Open source
Platform-neutral C/C++ (i.e. can build for Windows, POSIX environments)
Encodes/decodes data matrix
Actively maintained
Alternatively, did libdmtx move somewhere else and continue to get maintained somewhere that I'm not aware of?
I can't say that I'll never develop on libdmtx again, but it certainly wouldn't be anytime soon. I simply don't have the spare hours anymore to even keep up with the correspondence, let alone to perform any meaningful development.
So if you wish to fork it, you have my blessing. :)
Unfortunately I'm not aware of any other open source packages that do exactly the same things as libdmtx (which is why I created it in the first place), but I tried to list any similar projects I came across at http://libdmtx.sourceforge.net/resources.php
Good luck!
As libdmtx is currently unmaintained (I wouldn't say dead, as there are still several users of the library), one has to look at the options.
zxing-cpp is a viable alternative. It can encode and decode DataMatrix, QR codes and barcodes. It compiles on both Windows and POSIX, and it is open source (Apache 2).
My only complaint about the zxing-cpp library is that it doesn't support dot-peen-generated data matrix images.
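For a rough picture of what the zxing-cpp route looks like, here is a sketch assuming one of the recent releases with the ReadBarcode/DecodeHints API (the names have shifted a little between versions, and the grayscale buffer is just a stand-in for your own image loading):

    #include <ZXing/ReadBarcode.h>
    #include <cstdint>
    #include <iostream>
    #include <vector>

    int main()
    {
        // Stand-in 8-bit grayscale buffer; in a real program this would come
        // from your image loader or camera.
        const int width = 640, height = 480;
        std::vector<uint8_t> gray(width * height, 255);

        ZXing::ImageView image(gray.data(), width, height, ZXing::ImageFormat::Lum);

        ZXing::DecodeHints hints;
        hints.setFormats(ZXing::BarcodeFormat::DataMatrix);   // restrict to Data Matrix

        ZXing::Result result = ZXing::ReadBarcode(image, hints);
        if (result.isValid())
            std::cout << "decoded: " << result.text() << std::endl;
        else
            std::cout << "no Data Matrix found" << std::endl;
        return 0;
    }

The same library also has an encoding side, which is what makes it a plausible libdmtx replacement in both directions.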
This Github project has revived libdmtx in 2016 with sporadic but ongoing activity since then: https://github.com/dmtx
(I am not affiliated with this project, just wanted to add an update to this question after finding it in a search.)

Why should cocos2d-iphone users avoid using the @2x file extension?

Cocos2d-iphone uses the -hd extension for Retina images (and other assets). The cocos2d Retina guide speaks only vaguely of "some incompatibilities" regarding @2x:
Apple uses the "@2x" suffix, but cocos2d doesn't use that extension because of some incompatibilities. Instead, cocos2d has its own suffix: "-hd".
WARNING: It is NOT recommended to use the "@2x" suffix. Apple treats those images in a special way which might cause bugs in your cocos2d application.
Great. I feel well informed.
Through a 2-year-old bug report regarding @2x I got the link to a forum thread that supposedly explains the issues with @2x. However, it does not. The only hints I found in there are that there were iOS (4.0/4.1) bugs regarding @2x, which I suppose are no longer relevant. It's possible that I missed some crucial aspect (there was some talk about caching or repeat-loading issues) - the thread is very long, after all.
I'd like to know: what specific issues might a cocos2d developer encounter if he or she uses the @2x suffix for images instead of -hd?
Please give concrete examples of things that might go or actually will be wrong.
This seems to be the main reason from this link: http://www.cocos2d-iphone.org/forum/topic/12026
Specifically this post by riq:
I don't know if initWithContentsOfFile was fixed, but in 4.0 it was broken and it wasn't working with the @2x and ~iphone extensions.
imageNamed caches all the loaded files so it consumes much more memory than initWithContentsOfFile
Also, the @2x extension does something (I don't know exactly what), but it doesn't work OK with cocos2d.
Another good point: back when the iPhone 4 was just released with the Retina display, some users of Cocos2D were surely still on an older version of it, so when the Retina display was used with a version of Cocos2D that didn't support it, things were twice as large as they should have been. Again, this is now fixed for most, unless you are using a VERY early version of Cocos2D.
In overview, it seems that the main issue was with initWithContentsOfFile on iOS 4, but this has since been fixed, because I use that exact API with Cocos2D 2.0-rc2 in my app and I do not have any issues whatsoever. I use all Apple-specified extensions for images and everything works jolly good! :)
It seems as if this has a historical background.
What makes using -hd graphics still worthwhile is that loading them doesn't rely on Apple functionality but is instead done in framework code. So -hd images can be loaded for iPads in iPhone Simulator mode and make use of the higher-resolution pictures in 2x mode.
Other than that, I couldn't find any more reasons not to use @2x when I was looking into this a week ago.
In case you want all the details it is probably best to drop riq an email.

Open Source sound engine

When I started using SoundEngine (from CrashLanding and TouchFighter), I had read about a few people recommending not to use it, for it was, according to them, not stable enough. Still it was the only solution I knew of to play sounds with pitch and position control without learning C++ and OpenAL, so I ignored the warnings and went on with it.
But now I'm starting to worry. The 2.2 SDK introduced AVFoundation. Using both SoundEngine from CrashLanding (for sounds) and AVAudioPlayer (for music), I found out SoundEngine behaves strangely when the only existing AVAudioPlayer is released (all sounds stop until a new AVAudioPlayer is initiated). Around the same time as the 2.2 SDK came out, the CrashLanding sample code was mysteriously removed from the ADC site. I'm worried there are more bad surprises to come.
My question is, is anyone aware of an Open Source alternative to SoundEngine? Maybe even a C++ library that uses OpenAL?
Have a look at this library, though I don't know whether it is what you need.
The Kowalski project provides a data-driven and portable sound engine that currently runs on iOS, OS X and Windows. The engine is released under the zlib license and provides positional audio, pitch control etc.
ObjectAL for iPhone
Clone it. Use it. Love it. Enjoy the freedom.
Why not just use AVFoundation? It's pretty simple to handle and nicely flexible - apart from when you need exact timing (says the Apple documentation - but I've been testing it fairly extensively and have yet to find any significant practical issues), I don't see any reason not to leverage it.
AVFoundation lacks sound placement. This makes me sad.
I’ve written a simple sound engine around OpenAL. There are no position controls (I didn’t need them), but they would be trivial to add if you find the rest to your liking. And there is also some experimental sound code in the Cocos2D engine. It has both pitch and position controls and looks quite usable.
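Since several of the suggestions above ultimately sit on top of OpenAL, here is a bare-bones sketch of the OpenAL calls that provide the pitch and position control AVFoundation lacks. It is not taken from any of the engines mentioned, error checking is omitted, and the header paths assume Apple's OpenAL framework (on other platforms they are <AL/al.h> and <AL/alc.h>):

    #include <OpenAL/al.h>
    #include <OpenAL/alc.h>
    #include <cmath>
    #include <vector>

    int main()
    {
        // Open the default device and make a context current.
        ALCdevice*  device  = alcOpenDevice(NULL);
        ALCcontext* context = alcCreateContext(device, NULL);
        alcMakeContextCurrent(context);

        // One second of a 440 Hz sine wave (mono, 16-bit, 44.1 kHz) as test data.
        const int rate = 44100;
        std::vector<short> samples(rate);
        for (int i = 0; i < rate; ++i)
            samples[i] = (short)(32000.0 * std::sin(2.0 * 3.14159265 * 440.0 * i / rate));

        ALuint buffer;
        alGenBuffers(1, &buffer);
        alBufferData(buffer, AL_FORMAT_MONO16, samples.data(),
                     (ALsizei)(samples.size() * sizeof(short)), rate);

        // Each source carries its own pitch and 3D position.
        ALuint source;
        alGenSources(1, &source);
        alSourcei(source, AL_BUFFER, buffer);
        alSourcef(source, AL_PITCH, 1.5f);                    // play faster/higher
        alSource3f(source, AL_POSITION, -1.0f, 0.0f, 0.0f);   // off to the listener's left
        alListener3f(AL_POSITION, 0.0f, 0.0f, 0.0f);
        alSourcePlay(source);

        // Poll until playback finishes (fine for a demo, not for production code).
        ALint state = AL_PLAYING;
        while (state == AL_PLAYING)
            alGetSourcei(source, AL_SOURCE_STATE, &state);

        alDeleteSources(1, &source);
        alDeleteBuffers(1, &buffer);
        alcMakeContextCurrent(NULL);
        alcDestroyContext(context);
        alcCloseDevice(device);
        return 0;
    }

Wrapping calls like these behind a small class is essentially what SoundEngine, ObjectAL and the Cocos2D sound code do for you.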