iOS photo/video capture/usage using C++

Hi there, I need some help.
I already made an Android app that takes a picture/video and sends it to a server, which does some processing and returns a string. On the Android side the photo/video is captured in Java code, and the file path is passed to C++ code which, with the help of the NDK, sends the data to the server and gets the string back. The string is then passed back to Java for display, and the photo/video is deleted.
Now I need to do the same on iOS.
Is there a way to capture a photo/video from the camera and then pass its path to my C++ code, so that the C++ part stays the same as in my Android app?
Thanks
Edit:
Let me describe the question better.
I need Objective-C code that captures an image/video from the camera (I could find Apple's tutorial camera apps and plenty of others on other sites). What I need to add now is code that gets me the absolute path to the media file, so that I can pass it to the C++ code.

I found this article on how to access a C/C++ library in an iOS app with Objective-C++:
https://www.sitepoint.com/using-c-and-c-in-an-ios-app-with-objective-c/
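
For illustration, a minimal sketch of the shared native interface under that setup. The function name is hypothetical; the point is that it takes a plain file-system path, which both the Android JNI layer and an iOS Objective-C++ (.mm) wrapper can supply (on iOS, for example, by saving the picked media into NSTemporaryDirectory() and passing the URL's fileSystemRepresentation):

    // media_client.h -- hypothetical shared C++ interface, reused unchanged
    // between the Android and iOS builds.
    #include <string>

    // Uploads the media file at `path` to the server and returns the server's
    // reply string; the caller deletes the file afterwards.
    std::string uploadMediaAndGetReply(const std::string& path);

    // From an Objective-C++ (.mm) wrapper the call is plain C++, e.g.:
    //   std::string reply = uploadMediaAndGetReply(url.fileSystemRepresentation);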

Related

How to get the mic to C++ from QML WebGL?

I'm making call software using Qt and QML, and I need to get the microphone feed from QML running as WebGL to the C++ side, if not straight to GStreamer via a server.
I already have a Qt program as the client, using GStreamer to push the audio stream to the server. GStreamer, of course, doesn't reach the WebGL client side, though. I've found that you can request mic/camera permissions from QML, but I haven't found an example that actually grabs the stream from there. I've also looked into WebRTC. It seems like it could work with QML, and I have found some examples using it with GStreamer, but I haven't been able to get the combination of WebRTC and GStreamer working, even with the examples.
So the full question:
How can I get the audio from QML running as WebGL? Is there a way within Qt, or do I have to go through WebRTC? If so, is there a simpler or more beginner-friendly example than Nirbheek's gstwebrtc demos for connecting WebRTC to GStreamer?
Not the answer I wanted, but this ended up working in my case:
As the C++ side is also running another Qt GUI (QML, to be specific), I can use WebEngineView with HTML and JavaScript and never bother the C++ implementation with GStreamer for WebRTC. So currently I'm running PeerJS on both sides of the connection, with PeerJS' signaling server in between.
I'd have preferred to use C++ with GStreamer to connect to WebRTC, but I can't find another easy way to connect the browser's audio to Qt.
Edit: I apologise, this answer doesn't work in the end. I had been testing the programs on a single computer, so I didn't realize that WebGL-hosted QML runs WebEngineView's JavaScript on the backend, not on the frontend.
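
For reference, the part the question says already works (a C++ client pushing mic audio to a server with GStreamer) can be as small as one parsed pipeline. A minimal sketch, with placeholder host, port, and codec choices:

    #include <gst/gst.h>

    int main(int argc, char *argv[])
    {
        gst_init(&argc, &argv);

        // autoaudiosrc grabs the default microphone; opusenc compresses the
        // audio and udpsink streams it to the (placeholder) server address.
        GError *error = nullptr;
        GstElement *pipeline = gst_parse_launch(
            "autoaudiosrc ! audioconvert ! audioresample ! opusenc "
            "! rtpopuspay ! udpsink host=127.0.0.1 port=5000", &error);
        if (!pipeline) {
            g_printerr("Failed to build pipeline: %s\n", error->message);
            g_error_free(error);
            return 1;
        }

        gst_element_set_state(pipeline, GST_STATE_PLAYING);

        // Block until an error or end-of-stream.
        GstBus *bus = gst_element_get_bus(pipeline);
        GstMessage *msg = gst_bus_timed_pop_filtered(
            bus, GST_CLOCK_TIME_NONE,
            (GstMessageType)(GST_MESSAGE_ERROR | GST_MESSAGE_EOS));
        if (msg != nullptr)
            gst_message_unref(msg);
        gst_object_unref(bus);
        gst_element_set_state(pipeline, GST_STATE_NULL);
        gst_object_unref(pipeline);
        return 0;
    }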

How can I display my OSM tiles in a Qt client application?

I am working on a map project and I have an OSM tile server, built on Debian Jessie, which uses Mapnik and mod_tile to render tiles.
URL for a tile: http://domain/mod_tiles/Z/X/Y.png
I would like to build the client application in C++ with the Qt framework, but I really don't know how to start.
I found an example in Qt, but I don't know how to change the tile server from the default to my own.
If you know the answer, or know another way to solve my problem, please let me know.
According to http://doc.qt.io/qt-5/location-plugin-osm.html, there are two ways of doing this.
The simplest is to set osm.mapping.custom.host to http://domain/mod_tiles/ and then use the last entry of Map.supportedMapTypes, which is the custom map type the plugin appends when a custom host is set.
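
A minimal sketch of what that looks like from the C++ side, loading an inline QML Map with the custom host set (assuming Qt 5.6 or later with the QtLocation module; the host URL is the placeholder from the question):

    #include <QGuiApplication>
    #include <QQmlApplicationEngine>

    int main(int argc, char *argv[])
    {
        QGuiApplication app(argc, argv);
        QQmlApplicationEngine engine;
        // The inline QML points the osm plugin at the custom tile server and
        // switches to the custom map type, which is appended last.
        engine.loadData(R"qml(
            import QtQuick 2.7
            import QtQuick.Window 2.2
            import QtLocation 5.6
            import QtPositioning 5.6

            Window {
                visible: true; width: 640; height: 480
                Map {
                    anchors.fill: parent
                    plugin: Plugin {
                        name: "osm"
                        PluginParameter {
                            name: "osm.mapping.custom.host"
                            value: "http://domain/mod_tiles/"
                        }
                    }
                    center: QtPositioning.coordinate(47.5, 19.0)
                    zoomLevel: 10
                    Component.onCompleted:
                        activeMapType = supportedMapTypes[supportedMapTypes.length - 1]
                }
            }
        )qml");
        return app.exec();
    }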

How to bundle/store application image data in my OpenCV iOS application?

I am new to Xcode and iOS development and I have a basic question about how to store (bundle) application data that consists of images needed by my application. My application requires, as input, a small database of images that I supply.
I have an Xcode project in C++ that uses OpenCV, which currently compiles and runs correctly on my Mac. The application on my Mac simply reads the image data it needs from a folder on my file system that I can easily point to. I am trying to port this application to iOS using either Objective-C or Swift. I was able to write some basic Objective-C code as a wrapper around my C++/OpenCV application, but I am now at the point where I need to access the iOS file system to read the images, and I am not sure where to locate that data or how to configure my Xcode project to include it.
After doing some reading on this topic I see that there are several ways to store data in iOS, but I am uncertain about what approach would be appropriate and relatively easy to implement. My understanding is that all the data for my app needs to live in the application sandbox. I see plenty of examples for how to get the file path for various folders in the sandbox, but it is not clear to me how to actually configure my project to include the data (i.e., where do I put my images?). Is there something I need to configure within my Xcode project so that when I compile the application it knows about my data?
I found many postings about the iOS file system, Core Data, archiving data, etc…, but had a hard time locating any information about how to actually configure my project with data that I supply. Any help would be greatly appreciated.
Thank you!
If the images that you are supplying are fixed in the app (i.e., not changing over time), then you simply add the images to the app itself. They are stored in the app bundle and can be accessed as follows (with sample names):
// Look up the bundled image by name; pathForResource returns nil if missing.
let imagePath = NSBundle.mainBundle().pathForResource("myImage", ofType: "jpg")
// On iOS this is a UIImage (NSImage is the macOS class).
let anImage = UIImage(contentsOfFile: imagePath!)
You can create a folder in the project to hold the images. This is not used in the final image path; it is only for organization within the project. You can use the File menu, or drag and drop image files directly into the project. Make sure to check the 'copy files' option so the images are copied from your source location into the project itself.
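
The wrapper can then hand that bundle path straight down to the C++ side. A minimal sketch of the receiving end, assuming OpenCV is already linked into the project (the function name is hypothetical):

    #include <opencv2/opencv.hpp>
    #include <stdexcept>
    #include <string>

    // Hypothetical entry point, called from the Objective-C/Swift wrapper
    // with a path obtained from the app bundle as shown above.
    cv::Mat loadBundledImage(const std::string& path)
    {
        cv::Mat img = cv::imread(path, cv::IMREAD_COLOR);
        if (img.empty()) {
            // Either the file is not in the bundle or it could not be decoded.
            throw std::runtime_error("failed to load image: " + path);
        }
        return img;
    }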

Receiving WebRTC call from a C++ native Windows application

I would like, from a native Windows application written in C++, to receive video/audio data sent from a browser in a remote location. It seems like WebRTC is the way to go for this.
Most information I find is about how to interact with the browser to write WebRTC apps, but in my case the data would be received by my C++ app. Is it correct that I would need to use the WebRTC Native Code package for this, which is described as being 'for browser developers'? The document is located here: http://www.webrtc.org/webrtc-native-code-package
And what if I want to send video/audio data that I generate (i.e., not coming directly from a webcam and microphone)? Would I be able to send it to the browser at the remote location?
Is there any sample code out there that does something like what I'm trying to accomplish?
The wording in that link is a bit misleading. They intend people who are developing browsers to use the native code, and advise those who are developing "applications" in a browser to use the WebRTC API.
I have worked with their native code for over a year to develop an Android application that is capable of performing audio and/or video calls between other Android devices and browsers. So I am pretty sure that it is completely possible to take their native code and create a Windows application (especially since they have example code that does this for Linux and Mac; look at the peerconnection client and peerconnection server examples). You might have to write and re-write code to get it to work on Windows.
As for data that you generate: in the Android project that I worked on, we didn't rely on the Android device/system to provide us with video; we captured and sent that out ourselves using the "LibJingle"/WebRTC libraries. So I know that that is possible, as long as you provide the libraries with video data in the correct format. I would imagine that one would be able to do the same with audio, but we never fiddled with that, so I cannot say for sure.
And as for example code, I can only suggest Luke Weber's GitHub repositories. Although they are for Android, it might help to look at how he interfaces with the two libraries. Probably the better code to look at is the peerconnection client code that comes in the "LibJingle" section of the native code. [edit]: That is located in /talk/examples/peerconnection/client/ .
If you are confused by my use of "LibJingle": that shows when I started working with all of this code. Sometime around July 2013 they migrated "LibJingle" into the WebRTC "talk" folder. From everything that I have seen, they are the same thing, just with the location and name changed.

Port a native, non-static C++ binary to Android

I'm quite new to Android and I have some questions for all of you who are experts!
OK, my problem...
I implemented a client-server application based on socket programming. The server encodes some packets and sends them to the client through a socket, and the client decodes them.
I tested the code with two Linux machines and it works fine, but my experiment requires another node (this will be the Android device). So the server (a Linux machine) will encode the packets and send them through a socket to client1 (a Linux machine) and client2 (Android).
For this reason I want to port the native binary of my code (which is in C++) to Android.
How can I do this?
Please give me some help!
I'm really totally stuck!
Thanks,
Zenia
When you want to port native C/C++ code to Android, you want to look up the Android NDK and JNI:
http://developer.android.com/sdk/ndk/index.html
http://download.oracle.com/javase/1.5.0/docs/guide/jni/spec/functions.html
There are some examples in the NDK on how to do this.
Be warned that C is fully supported, but C++ API support is very limited on Android (the list is in the NDK docs), so you might have problems porting your code.
I would recommend using Java directly if you can, since working with JNI is tedious.
How else can you port this? Start learning Android. I did a quick check and noticed its SDK uses Java; you can start by looking at:
http://developer.android.com/reference/java/net/Socket.html
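
For illustration, here is a minimal JNI bridge in C++. Everything in it is hypothetical: it assumes a Java class com.example.Client declaring native String sendPackets(String host, int port), with the existing C++ socket client called where indicated:

    #include <jni.h>
    #include <string>

    // JNI naming convention: Java_<package>_<class>_<method>.
    extern "C" JNIEXPORT jstring JNICALL
    Java_com_example_Client_sendPackets(JNIEnv *env, jobject /*thiz*/,
                                        jstring host, jint port)
    {
        // Convert the Java string into a C string for the native code.
        const char *chost = env->GetStringUTFChars(host, nullptr);

        // ... call the existing C++ socket client here instead of this stub.
        std::string reply = std::string("connected to ") + chost + ":"
                          + std::to_string(port);

        env->ReleaseStringUTFChars(host, chost);
        return env->NewStringUTF(reply.c_str());
    }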
Thanks for the reply.
I first tried to write my own code entirely in Java using sockets; however, I had to port some optimized libraries to Android and I couldn't figure out how to do that (I could port a small, simple library, but not the one that I wanted). I gave up, and right now I'm trying to play with JNI and the NDK. However, I don't know whether I can actually port my binary, since it is not static (not a simple "hello world"). That's why I'm asking. If anyone else has some experience with that, please let me know. Thanks a lot,
Zenia
What you should probably do is install the SDK and NDK and build the hello-jni NDK example.
Then look up how to access the Android logcat output from C, and write yourself a nice little printf-like wrapper for that (probably using the varargs version of the underlying function) so you can easily generate debug output from your native code.
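A minimal sketch of such a wrapper, using the NDK's android/log.h (the tag is a placeholder):

    #include <android/log.h>
    #include <cstdarg>

    #define LOG_TAG "native-port"

    // printf-style logging that shows up in `adb logcat`.
    void debug_log(const char *fmt, ...)
    {
        va_list args;
        va_start(args, fmt);
        // __android_log_vprint is the va_list variant of __android_log_print.
        __android_log_vprint(ANDROID_LOG_DEBUG, LOG_TAG, fmt, args);
        va_end(args);
    }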
Then graft your native executable onto the hello-jni example code, so you'll have a Java wrapper that does very little other than start things with a call into the native code. Just remember not to do much processing in the UI thread, or in native code called from that thread, or you will risk an application-not-responding timeout.
It is also possible to (ab)use the NDK's gcc to produce standalone native executables with no Java wrapper, but this is discouraged. It's hard to find a reliable place to install them on a non-rooted phone, and Android's process management isn't happy about unknown native processes. In other words, that's a path that's fine for personal experiments on your own device, but a difficult and non-future-proof one for an application deployed to others.