How to use dlib in iOS Swift? - c++

I am doing a simple demo project on iOS using Swift and dlib. Let's say I already have modified code for extracting facial landmarks using the dlib C++ library, and I have tested that code in Xcode, where it works pretty well (although the capture is a little slow).
Now I want to use the iPhone front camera to test on the device. I only know C++ and Swift at this time. How do I bring them together? What's the recommended method? Do I need to put all the functions I need in *.mm files and invoke them from the Swift code?
Looking forward to great answers, and thanks a lot!
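For context, the dlib side of what I want to wrap is plain C++. A rough sketch of the kind of routine I would call from the *.mm wrapper looks like this (the grayscale-buffer interface, function name, and model path are just placeholders; the camera-frame conversion happens elsewhere):

```cpp
#include <dlib/image_processing/frontal_face_detector.h>
#include <dlib/image_processing.h>
#include <vector>
#include <utility>

// Landmarks for the first detected face, as (x, y) pairs in pixel coordinates.
std::vector<std::pair<long, long>> extractLandmarks(const unsigned char *pixels,
                                                    long width, long height)
{
    // These are expensive to construct; keep them alive between frames
    // instead of rebuilding them on every call.
    static dlib::frontal_face_detector detector = dlib::get_frontal_face_detector();
    static dlib::shape_predictor predictor;
    static bool loaded = false;
    if (!loaded) {
        // Placeholder path: bundle the trained model file with the app.
        dlib::deserialize("shape_predictor_68_face_landmarks.dat") >> predictor;
        loaded = true;
    }

    // Copy the grayscale buffer into a dlib image.
    dlib::array2d<unsigned char> img;
    img.set_size(height, width);
    for (long r = 0; r < height; ++r)
        for (long c = 0; c < width; ++c)
            img[r][c] = pixels[r * width + c];

    std::vector<std::pair<long, long>> points;
    std::vector<dlib::rectangle> faces = detector(img);
    if (!faces.empty()) {
        dlib::full_object_detection shape = predictor(img, faces[0]);
        for (unsigned long i = 0; i < shape.num_parts(); ++i)
            points.emplace_back(shape.part(i).x(), shape.part(i).y());
    }
    return points;
}
```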

Related

OpenCV Hough Transformation in Swift

I'm about to start a project in which I need to extract straight lines from an image.
The first thing that came to mind was the Hough transformation, so I downloaded the OpenCV framework and added it to my project, along with the bridging header and the OpenCVWrapper.h and OpenCVWrapper.mm files. But I can't find any tutorial on how to implement the Hough transformation in a Swift project. What sort of file do I need to create to write the function in, and what should that function look like?
I'm happy to hear any helpful suggestion!
Kind regards
Robert
I'm also currently working with OpenCV in Swift; there indeed aren't many tutorials about it.
As for the Hough transformation, here is the link to the documentation; I hope it helps.
And this project helped me a lot when I wanted a reference.
Good luck!
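As a starting point, the C++ that would sit inside OpenCVWrapper.mm could look roughly like this. This is a minimal sketch: it assumes the UIImage has already been converted to a grayscale cv::Mat, and the function name and thresholds are placeholders you would tune for your images.

```cpp
#include <opencv2/imgproc.hpp>
#include <vector>

// Detect straight line segments in a grayscale image.
// Each segment is returned as (x1, y1, x2, y2).
std::vector<cv::Vec4i> detectLines(const cv::Mat &gray)
{
    // The Hough transform works on a binary edge image, so run Canny first.
    cv::Mat edges;
    cv::Canny(gray, edges, 50.0, 150.0);

    // Probabilistic Hough transform: rho = 1 px, theta = 1 degree,
    // accumulator threshold 80, minimum segment length 30 px, max gap 10 px.
    std::vector<cv::Vec4i> lines;
    cv::HoughLinesP(edges, lines, 1.0, CV_PI / 180.0, 80, 30.0, 10.0);
    return lines;
}
```

The Objective-C++ wrapper would then convert these cv::Vec4i values into whatever structure you want to expose to Swift through the bridging header.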

UI Integration with Unity 3D and Fusetools

I need to integrate FuseTools animation samples with Unity 3D animations. Can you please tell me whether this is possible and which approach would be suitable?
Thanks
I have an interest in FuseTools too. I think one approach would be using the Foreign Code feature to wrap Unity 3D. This wouldn't be an easy task. There are some discussions regarding graphics in the Fuse forums; I would suggest having a look before venturing further.
It can't be done.
Unity and Fuse(tools) both compile projects into apps that are ready to publish; neither of them can generate code that can be used or embedded outside its own platform.
Unity uses IL2CPP to generate C++ code; Fuse(tools) uses Uno for the same purpose.
Maybe if you unwrap both compiled projects there is a way to mix things, but in the end it would be easier to code directly in Java or Objective-C/Swift.

Implementing OCR on iOS

I was wondering if anyone had an idea of how one would implement OCR image linking on an iOS device.
What I want the app to do is scan an image using the iPhone's camera and then recognise that image. When the image is recognised, the app should open a link associated with that image.
A good example of what I'm talking about comes from a company called Augment. They make a product called "Trackers", which is exactly what I would like to implement.
There is no built-in or off-the-shelf SDK that does exactly what you require.
However, you can achieve it by building on the OpenCV library or one of the augmented reality SDKs; a rough sketch follows after the links below.
Here are some links that may be helpful to you:
OpenCV library tutorial iOS
Wikitude Augmented reality
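To make the "build on OpenCV" suggestion more concrete, here is a rough sketch, not taken from any SDK, of matching a camera frame against one known reference image using ORB features; the function name and thresholds are placeholders, and in the app you would map each reference image to the link it should open.

```cpp
#include <opencv2/features2d.hpp>
#include <vector>

// Returns true when enough ORB features of the reference image are found
// in the camera frame; both images are expected as grayscale cv::Mat.
bool matchesReference(const cv::Mat &frame, const cv::Mat &reference,
                      int minGoodMatches = 25)
{
    cv::Ptr<cv::ORB> orb = cv::ORB::create();

    std::vector<cv::KeyPoint> kpFrame, kpRef;
    cv::Mat descFrame, descRef;
    orb->detectAndCompute(frame, cv::noArray(), kpFrame, descFrame);
    orb->detectAndCompute(reference, cv::noArray(), kpRef, descRef);
    if (descFrame.empty() || descRef.empty())
        return false;

    // Hamming distance is the right metric for ORB's binary descriptors.
    cv::BFMatcher matcher(cv::NORM_HAMMING, /*crossCheck=*/true);
    std::vector<cv::DMatch> matches;
    matcher.match(descRef, descFrame, matches);

    // Count reasonably close matches; the distance cutoff is a rough heuristic.
    int good = 0;
    for (const cv::DMatch &m : matches)
        if (m.distance < 40.0f)
            ++good;
    return good >= minGoodMatches;
}
```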
Now there is the Real-Time Recognition SDK (http://rtrsdk.com).
It is free, by the way. Disclaimer: I work for ABBYY.

How to use the C++ functions of OpenCV from Python?

I'm using the Python bindings of OpenCV and they're really great. However, there are functions in the C++ version that are missing from the Python bindings, for example BackgroundSubtractorMOG2 and a lot of the feature detection algorithms. What would be the easiest way to call them from Python?
I hope this helps people looking for a fast and easy way.
Here is the GitHub repo with the open-source C++ code I wrote for exposing code that uses OpenCV's Mat class with as little pain as possible. It was originally inspired by Yati Sagade's example.
[Update] This code now works for OpenCV 2.X and OpenCV 3.X. CMake and experimental support for Python 3.X are now also available.
I also found that, a few months after my original utility was written, Sudeep Pillai wrote a similar thing for C++/CMake. It seems to support OpenCV 2 and OpenCV 3 as well. It may be worth a try.
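To illustrate the general idea without reproducing the repo, here is a rough sketch of a wrapper for the missing BackgroundSubtractorMOG2, written with pybind11 rather than the Boost.Python converters my utility is based on; the module and function names are made up, and a real binding would also handle strides and colour inputs.

```cpp
// Built as a Python extension module, e.g. with pybind11's CMake helpers.
#include <pybind11/pybind11.h>
#include <pybind11/numpy.h>
#include <opencv2/video.hpp>
#include <stdexcept>

namespace py = pybind11;

// Takes a grayscale frame as a 2-D uint8 numpy array and returns the
// foreground mask computed by the C++ BackgroundSubtractorMOG2.
py::array_t<unsigned char> apply_mog2(py::array_t<unsigned char> frame)
{
    // Keep one subtractor alive across calls so the background model
    // actually accumulates over successive frames.
    static cv::Ptr<cv::BackgroundSubtractorMOG2> mog2 =
        cv::createBackgroundSubtractorMOG2();

    py::buffer_info buf = frame.request();
    if (buf.ndim != 2)
        throw std::runtime_error("expected a 2-D grayscale array");

    // Wrap the numpy buffer in a Mat header (no copy).
    cv::Mat input(static_cast<int>(buf.shape[0]),
                  static_cast<int>(buf.shape[1]),
                  CV_8UC1, buf.ptr);

    cv::Mat mask;
    mog2->apply(input, mask);

    // Copy the mask back out as a new numpy array owned by Python.
    return py::array_t<unsigned char>({mask.rows, mask.cols}, mask.data);
}

PYBIND11_MODULE(cv_extras, m)
{
    m.def("apply_mog2", &apply_mog2,
          "Foreground mask from OpenCV's BackgroundSubtractorMOG2");
}
```

From Python you would then call something like `mask = cv_extras.apply_mog2(gray_frame)` in your capture loop.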
Have a look at SimpleCV. It's Python-based and it wraps OpenCV. Maybe you can find something there.

I'm trying to integrate a 3D model viewer into my GUI, but have not found a single library that will allow me to do this easily. Any suggestions?

I've tried VTK, PCL and Qt (using QVTKWidget.h); however, using CMake is incredibly inconvenient, because the second I update any one of the many libraries my GUI uses, I have to spend at least another day sorting out linker issues. Additionally, a lot of the time a lot of information is lost from the 3D models with these libraries.
Note: I am focusing on PLY because it holds color and geometry information in the same file, but any other format that does the same would be fine.
I am currently trying to create a MeshLab plugin, but support for this library is sparse, and I have yet to successfully compile the MeshLab source.
Any input or direction would be really appreciated. If you guys want to know anything more, please do let me know.
If it wasn't clear in the beginning, I am using Qt (C++) to create the GUI.
Use the Qt OpenGL widget and write some OpenGL code to display your model.
The PLY text format is really simple, and you can write a parser yourself.
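For example, a bare-bones reader for the ASCII variant could look like the sketch below. It assumes the vertex element declares x, y, z, red, green, blue in that order, and it skips faces and error handling entirely.

```cpp
#include <fstream>
#include <sstream>
#include <string>
#include <vector>

struct Vertex {
    float x, y, z;
    unsigned char r, g, b;
};

// Minimal reader for ASCII PLY files whose vertex element is declared as
// x y z red green blue (in that order). Faces and other elements are ignored.
std::vector<Vertex> readPlyVertices(const std::string &path)
{
    std::ifstream in(path);
    std::string line;
    std::size_t vertexCount = 0;

    // Header: find the vertex count, then stop at end_header.
    while (std::getline(in, line)) {
        std::istringstream ls(line);
        std::string word;
        ls >> word;
        if (word == "element") {
            std::string name;
            ls >> name;
            if (name == "vertex")
                ls >> vertexCount;
        } else if (word == "end_header") {
            break;
        }
    }

    // Body: one vertex per line, properties in declaration order.
    std::vector<Vertex> vertices;
    vertices.reserve(vertexCount);
    for (std::size_t i = 0; i < vertexCount && std::getline(in, line); ++i) {
        std::istringstream ls(line);
        Vertex v;
        int r, g, b;  // read as int so the stream doesn't parse single chars
        ls >> v.x >> v.y >> v.z >> r >> g >> b;
        v.r = static_cast<unsigned char>(r);
        v.g = static_cast<unsigned char>(g);
        v.b = static_cast<unsigned char>(b);
        vertices.push_back(v);
    }
    return vertices;
}
```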
Have you tried Coin3D? It is a free implementation of Open Inventor, which was made by SGI back in the day as a C++ wrapper around OpenGL.
As for integration with Qt, there is a library called SoQt (on the same site). They also have a newer library called Quarter that integrates more like a Qt component.
I've had the greatest success with Coin + SoQt + Qt.
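If you go the Coin + SoQt route, the classic "hello cone"-style skeleton looks roughly like this (written from memory, so treat it as a sketch; for a PLY model you would build or import your own scene graph instead of adding an SoCone):

```cpp
#include <Inventor/Qt/SoQt.h>
#include <Inventor/Qt/viewers/SoQtExaminerViewer.h>
#include <Inventor/nodes/SoSeparator.h>
#include <Inventor/nodes/SoCone.h>

int main(int argc, char **argv)
{
    // Initializes Coin, SoQt and the underlying Qt application.
    QWidget *window = SoQt::init(argv[0]);

    // Root of the scene graph; a cone stands in for the loaded model.
    SoSeparator *root = new SoSeparator;
    root->ref();
    root->addChild(new SoCone);

    // Examiner viewer gives you rotate/pan/zoom interaction for free.
    SoQtExaminerViewer *viewer = new SoQtExaminerViewer(window);
    viewer->setSceneGraph(root);
    viewer->show();

    SoQt::show(window);
    SoQt::mainLoop();

    root->unref();
    delete viewer;
    return 0;
}
```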