Chrome Native Messaging Error when Communicating - c++

I am trying to create an extension for Google Chrome in which I want to process some images.
The extension was previously built with NPAPI, but since that is being phased out I need to switch to an alternative, and Native Messaging looked best suited for the job.
The native host is written in C++. It reads from stdin a formatted message sent from the extension (something like {action:"name_of_action",buffer:"x0x0",length:"4"}), parses it, extracts the buffer, and does some processing on the image; after that I need to return a message to the extension.
The problem I am facing is that after a few messages (the number is not the same every time), the port disconnects and the JavaScript console shows: Error when communicating with the native messaging host.
My application basically does this:
while (true)
{
    /* read until I reach a delimiter */
    int i = 0;
    while (true) {
        c = getchar();
        if (c == EOF)   /* Chrome closed the pipe */
            return 0;
        buffer[i] = c;
        /* the delimiter occupies buffer[i-len+1 .. i] once enough bytes are in */
        if (i >= len - 1 && memcmp(buffer + i - len + 1, delimiter, len) == 0)
            break;
        i++;
    }
    ProcessMessage(buffer);
}
I am sending image buffers from the extension (base64 encoded), then decoding and processing them in the app. I have also tried (on Windows) using the URLDownloadToFile function to download the image from C++, but that seems to fail too, ending in the same Error when communicating with the native messaging host message. Does anybody know why Chrome doesn't allow downloading a file from the messaging host executable?
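For what it's worth, Chrome's native messaging protocol does not use delimiters at all: every message is preceded by a 32-bit length in native byte order, and any stray bytes on stdout (or newline translation when Windows streams are in text mode) will disconnect the port with exactly this error. A minimal sketch of a correctly framed read/write loop (not the asker's code; the JSON handling is left out):

#include <cstdint>
#include <cstdio>
#include <string>
#ifdef _WIN32
#include <fcntl.h>
#include <io.h>
#endif

int main() {
#ifdef _WIN32
    /* binary mode is mandatory on Windows or \n -> \r\n breaks the framing */
    _setmode(_fileno(stdin), _O_BINARY);
    _setmode(_fileno(stdout), _O_BINARY);
#endif
    while (true) {
        uint32_t len = 0;
        if (fread(&len, sizeof(len), 1, stdin) != 1)
            break;                      /* Chrome closed the pipe */
        std::string msg(len, '\0');
        if (len > 0 && fread(&msg[0], 1, len, stdin) != len)
            break;

        /* ... parse the JSON, decode the buffer, process the image ... */

        std::string reply = "{\"status\":\"ok\"}";
        uint32_t rlen = static_cast<uint32_t>(reply.size());
        fwrite(&rlen, sizeof(rlen), 1, stdout);
        fwrite(reply.data(), 1, reply.size(), stdout);
        fflush(stdout);                 /* flush so Chrome sees the message */
    }
    return 0;
}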

If you just want to do image processing in native code then you probably don't need Native Messaging. You can most likely use NaCl, or PNaCl, which produces OS-neutral executables that can be run safely within Chrome.
To communicate with your NaCl module you can PostMessage to and from your extension's JavaScript code. You can even send dictionary objects directly and decompose them in native code using the dictionary interface.
Native Messaging should only be needed when you need access to OS functionality not exposed by PPAPI, or when you need to load/run pre-compiled code (e.g. load a Windows DLL).
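To illustrate that message path, here is a minimal PPAPI sketch of an instance that receives a dictionary from JavaScript and posts one back (the build glue such as the .nmf manifest is omitted, and the field names are made up for the example):

#include "ppapi/cpp/instance.h"
#include "ppapi/cpp/module.h"
#include "ppapi/cpp/var.h"
#include "ppapi/cpp/var_dictionary.h"

class ImageInstance : public pp::Instance {
 public:
  explicit ImageInstance(PP_Instance instance) : pp::Instance(instance) {}

  /* Called for every postMessage() from the extension's JavaScript. */
  void HandleMessage(const pp::Var& var) override {
    if (!var.is_dictionary())
      return;
    pp::VarDictionary msg(var);
    std::string action = msg.Get("action").AsString();
    /* ... decode msg.Get("buffer"), run the image processing ... */

    pp::VarDictionary reply;
    reply.Set("status", "done");
    PostMessage(reply);  /* arrives as a 'message' event in JavaScript */
  }
};

class ImageModule : public pp::Module {
 public:
  pp::Instance* CreateInstance(PP_Instance instance) override {
    return new ImageInstance(instance);
  }
};

namespace pp {
Module* CreateModule() { return new ImageModule(); }
}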

Related

Play AudioStream of WebRTC C++ Application with Audio Device

I wrote two command line applications in C++ which use WebRTC:
Client creates a PeerConnection and opens an AudioStream
Server receives and plays the AudioStream
The basic implementation works: They exchange SDP-Offer and -Answer, find their external IPs using ICE, a PeerConnection and a PeerConnectionFactory with the corresponding constraints are created, etc. I added a hook on the server side to RtpReceiverImpl::IncomingRtpPacket which writes the received payload to a file. The file contains valid PCM audio. Therefore, I assume the client streams data successfully through the network to the server application.
On the server side, my callback PeerConnectionObserver::OnAddStream is called and receives a MediaStreamInterface. Furthermore, I can iterate over my audio devices with DeviceManagerInterface::GetAudioOutputDevices. So basically, everything is fine.
What is missing: I need some kind of glue to tell WebRTC to play my AudioStream on the corresponding device. I have seen that I can get AudioSink, AudioRenderer and AudioTrack objects. Again, unfortunately, I do not see an interface to pass them to the audio device. Can anyone help me with that?
One important note: I want to avoid debugging real hardware. Therefore, I added -DWEBRTC_DUMMY_FILE_DEVICES when building my WebRTC modules. It should write audio to an output file, but the file just contains 0x00. The input file is read successfully because, as I mentioned earlier, audio is sent via RTP.
Finally, I found the solution. First, I have to say that my code uses a WebRTC build from 2017, so the following things may have changed and/or been fixed already:
After debugging my code and the WebRTC library I saw: When a remote stream is added, playback should start automatically. There is no need on the developer side to call playout() in the VoiceEngine or something comparable. When the library recognizes a remote audio stream, playback is paused, the new source is added to the mixer, and playback is resumed. The only APIs to control playback are provided by webrtc::MediaStreamInterface which is passed via the PeerConnectionObserver::OnAddStream. Examples are SetVolume() or set_enabled().
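For illustration, with the 2017-era headers that control surface looks roughly like this (a sketch; the include path and the 0..10 volume range are from that era's api/mediastreaminterface.h):

#include "api/mediastreaminterface.h"

/* Called with the remote stream that PeerConnectionObserver::OnAddStream
   delivers; playback itself starts without any further call. */
void EnableRemoteAudio(webrtc::MediaStreamInterface* stream) {
    for (const auto& track : stream->GetAudioTracks()) {
        track->set_enabled(true);             /* un-mute the remote track */
        if (track->GetSource())
            track->GetSource()->SetVolume(5); /* range is 0..10 in this API */
    }
}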
So, what went wrong in my case? I used the FileAudioDevice class, which should write raw audio data to an output file instead of to the speakers. My implementation contained two bugs:
FileAudioDevice::Playing() returned true in any case. Due to this, the library added the remote stream, wanted to resume playout, called FileAudioDevice::Playing(), which returned true, and aborted because it thought the AudioDevice was already in playout mode.
There seems to be a bug in the FileWrapper class. The final output is written onto disk in FileAudioDevice::PlayThreadProcess() via _outputFile.Write(_playoutBuffer, kPlayoutBufferSize). However, this does not work. Luckily, a plain C fwrite() as a hacky bugfix does work.
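In code, the two fixes amounted to something like this (a sketch against the 2017-era modules/audio_device/dummy/file_audio_device.cc; member names are from memory, _rawOutputFile is a plain FILE* member added for the workaround, and the surrounding code is elided):

/* Fix 1: report the real playout state instead of a hard-coded true,
   so the library's pause/resume logic no longer aborts. */
bool FileAudioDevice::Playing() const {
    rtc::CritScope lock(&_critSect);
    return _playing;
}

/* Fix 2: in FileAudioDevice::PlayThreadProcess(), replace the FileWrapper
   write that produced all-zero output with plain C stdio:

     _outputFile.Write(_playoutBuffer, kPlayoutBufferSize);         // zeros
     fwrite(_playoutBuffer, 1, kPlayoutBufferSize, _rawOutputFile); // works
*/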

Single producer and multiple consumer using C++

I'm using C++ and have a simple client .exe that, when handed a file name, processes it and returns a success or error code. I want to create a Windows C++ .exe that does the following, and was looking for sample code to do it:
Start 4 (or x) client .exe instances as separate processes (for example using CreateProcess)
While the list of files is not empty, send work to the clients: each client will process a sent file name and return either a success or an error code
Once the list of files to process is empty (or the producer .exe shuts down), close the 4 clients (so they shut down).
I did some research on this and found that pipes can be used to communicate between processes. I found this sample app that does communication between a server and a client in C++: https://code.msdn.microsoft.com/windowsapps/CppNamedPipeServer-d1778534
The sample app, however, sends a request from the client to the server and gets a response, whereas I wanted to modify it (or use a different sample app) to do batch processing through a common queue of work (or a pipe that stores this queue or batch of work) and send work to the clients. I want to synchronize this work so that as soon as a client is done with one file, I'll send it another file to process.
Basically I want to create a sample application .exe that start multiple clients and send them work through inter-process communication. Any sample C++ code to do this is appreciated.
Thanks
Jeff Lacoste
You could have a look at Boost. It has boost::interprocess, where you can read about a lot of the methods that exist for IPC.
I personally never use boost::interprocess as I'm a huge fan of boost::asio, and, just like for your purposes, it has everything you need (except creating a process).
There are many more options to be found on Google, and it is entirely opinion-based which library to use, or whether to use the native OS API directly, which is why I wonder why this question has not been closed yet.
As for your request for "code samples", those two links contain samples for everything you listed regarding IPC, and they are open source, so you can look at how the libraries communicate with the native OS API.
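As a concrete starting point, here is a minimal Win32 sketch of the producer side in the spirit of the MSDN named-pipe sample linked above. It assumes a hypothetical client.exe that connects to \\.\pipe\filework, reads a file name, and writes back a one-byte status; the dispatch loop is deliberately naive (synchronous round-robin), where real code would use overlapped I/O or one thread per pipe so a fast client never waits behind a slow one:

#include <windows.h>
#include <queue>
#include <string>
#include <vector>

int main() {
    std::queue<std::string> files;          /* the shared work queue */
    files.push("C:\\images\\a.png");        /* ... fill with real names ... */

    const DWORD kClients = 4;
    std::vector<HANDLE> pipes;
    for (DWORD i = 0; i < kClients; ++i) {
        /* one pipe instance per client; all clients open the same name */
        HANDLE pipe = CreateNamedPipeA("\\\\.\\pipe\\filework",
            PIPE_ACCESS_DUPLEX,
            PIPE_TYPE_MESSAGE | PIPE_READMODE_MESSAGE | PIPE_WAIT,
            kClients, 4096, 4096, 0, NULL);

        /* start one client; it is assumed to connect to the pipe on startup */
        STARTUPINFOA si = { sizeof(si) };
        PROCESS_INFORMATION pi;
        char cmd[] = "client.exe";
        CreateProcessA(NULL, cmd, NULL, NULL, FALSE, 0, NULL, NULL, &si, &pi);
        ConnectNamedPipe(pipe, NULL);       /* blocks until the client connects */
        pipes.push_back(pipe);
    }

    /* send one file name per client, wait for its status byte, repeat */
    while (!files.empty()) {
        for (HANDLE pipe : pipes) {
            if (files.empty()) break;
            std::string name = files.front(); files.pop();
            DWORD n = 0;
            WriteFile(pipe, name.c_str(), (DWORD)name.size(), &n, NULL);
            char status = 0;
            ReadFile(pipe, &status, 1, &n, NULL);  /* success or error code */
        }
    }

    /* an empty message tells each client to shut down */
    for (HANDLE pipe : pipes) {
        DWORD n = 0;
        WriteFile(pipe, "", 0, &n, NULL);
        CloseHandle(pipe);
    }
    return 0;
}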

WebRTC stream webcam browser to C/C++ native application

I am having some trouble with the WebRTC API (most particularly RTCPeerConnection).
I have successfully managed to make a video call between two browsers: I know how to get the webcam stream with getUserMedia, I know how to contact a STUN server and react to the 'onicecandidate' event, and I know how to create the offer (and the answer on the other peer) and send the SDP.
I use WebSockets as a signalling channel.
What I need to do is process the video stream with C/C++ algorithms, so I am looking for a way to receive an RTCPeerConnection in C/C++ and receive a call in C/C++.
I have been trying to build and test Google's libjingle library (I haven't succeeded yet, though; I'm on Arch Linux). But even when I succeed, I don't see how to reuse the code for my own case.
What I have done so far:
I understand how STUN servers work, ICE candidates and SDP sessions and how to create / process them in javascript
I managed to make peer-to-peer calls between two browsers (and even between a PC and my Android and this worked perfectly)
I managed to use libwebsockets to create a simple signalling server, in which I successfully receive the browser's ICE candidates and sdp messages.
What I am looking for:
A way to receive/parse/process in C/C++ what the browser sends i.e. ICE candidates, sdp, offer
A way to create the answer (the browser will always be the initiator) and receive/process the webcam stream.
What I have tried:
I have tried to have the webcam play in an HTML5 <video> element, periodically (~33 ms) draw the frame to a <canvas>, call getImageData() and send the array of (R,G,B,alpha) over a pure WebSocket connection. But even for a 100x100 px grayscale frame (hence 10 kB), I can only achieve ~7 fps with a ~600 kb/s upload stream. This is why I want to use RTCPeerConnection, which works over UDP.
My constraints:
I need to run the native app in C or C++ because I have image/video processing algorithms that are implemented in C++ (I have seen a lot of Node.js-based servers but I can't have that: no way to call my algorithms)
I'd like to be able to run at roughly 30 fps so that this is relatively fluid for the user.
I can't use Flash or Silverlight: I need to stay HTML5 / javascript for the client
Conclusion:
Where I fall short is everything that deals with ICE candidates, SDP sessions and contacting a STUN server in C/C++, because, unlike in javascript, there are no events ('onicecandidate', 'onaddstream', etc.).
Thank you in advance for your help !
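For reference, the native API does expose equivalents of those events: they arrive as virtual methods on observer interfaces that you implement yourself. A rough sketch against the libjingle-era headers (include paths and some signatures have shifted in later WebRTC releases):

#include <string>
#include "talk/app/webrtc/peerconnectioninterface.h"  /* libjingle-era path */

/* The C++ counterpart of the JS handlers: 'onicecandidate' and
   'onaddstream' become virtual methods on PeerConnectionObserver. */
class ConnectionObserver : public webrtc::PeerConnectionObserver {
 public:
  /* ~ 'onicecandidate': serialize and push to the browser over the
     WebSocket signalling channel */
  void OnIceCandidate(const webrtc::IceCandidateInterface* candidate) override {
    std::string sdp;
    candidate->ToString(&sdp);
    /* send sdp plus candidate->sdp_mid() / sdp_mline_index() here */
  }
  /* ~ 'onaddstream': the remote webcam stream shows up here; attach a
     renderer/sink to feed frames into the C/C++ processing code */
  void OnAddStream(webrtc::MediaStreamInterface* stream) override {}
  void OnRemoveStream(webrtc::MediaStreamInterface* stream) override {}
  void OnSignalingChange(
      webrtc::PeerConnectionInterface::SignalingState state) override {}
  void OnIceConnectionChange(
      webrtc::PeerConnectionInterface::IceConnectionState state) override {}
  void OnIceGatheringChange(
      webrtc::PeerConnectionInterface::IceGatheringState state) override {}
  void OnDataChannel(webrtc::DataChannelInterface* channel) override {}
  void OnRenegotiationNeeded() override {}
};

/* Applying the browser's offer and producing the answer runs through the
   same interfaces (the SetSessionDescriptionObserver /
   CreateSessionDescriptionObserver arguments are elided):

     webrtc::SdpParseError err;
     auto* offer = webrtc::CreateSessionDescription("offer", sdp_text, &err);
     pc->SetRemoteDescription(set_observer, offer);
     pc->CreateAnswer(create_observer, nullptr);
*/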

Receiving WebRTC call from a C++ native Windows application

I would like, from a native Windows application using C++, to receive video/audio data sent from a browser located in a remote location. It seems like WebRTC is the way to go for this.
Most information I find is about how to interact with the browser to write WebRTC apps, but in my case the data would be received by my C++ app. Is it correct that I would need to use the WebRTC Native Code package for this, which is described as being 'for browser developers'? The document is located here: http://www.webrtc.org/webrtc-native-code-package
And what if I want to send video/audio data that I generate (i.e. not coming directly from a webcam and microphone)? Would I be able to send it to the browser at the remote location?
Any sample code out there which does something like I'm trying to accomplish?
The wording in that link is a bit misleading. They intend people that are developing browsers to use the native code, and advise those that are developing "applications" in a browser to use the WebRTC API.
I have worked with their native code for over a year to develop an Android application that is capable of performing audio and/or video calls between other Android devices and to browsers. So, I am pretty sure that it is completely possible to take their native code and create a Windows application (especially since they have example code that does this for Linux and Mac; look at peerconnection client and peerconnection server). You might have to write and rewrite code to get it to work on Windows.
As for data that you generate: in the Android project that I worked with, we didn't rely on the Android device/system to provide us with video; we captured and sent it out ourselves using the "LibJingle"/WebRTC libraries. So, I know that is possible, as long as you provide the libraries with video data in the correct format. I would imagine that one would be able to do the same with audio, but we never fiddled with that, so I cannot say for sure.
And as for example code, I can only suggest Luke Weber's github repositories. Although it is for Android, it might be of some help to look at how he interfaces with the two libraries. Probably the better code to look at is the peerconnection client stuff that comes in the "LibJingle" section of the native code. [edit]: That is located in /talk/examples/peerconnection/client/.
If you get lost by my use of "LibJingle", that will show you when I started working with all of this code. Sometime around July 2013 they migrated "LibJingle" into the WebRTC "talk" folder. From everything that I have seen, they are the same thing, just with the location and name changed.

C++ stream server for HTML5 Audio

Is it possible to have the src of an HTML5 Audio tag be a C++ program, and for the C++ program to stream audio to the audio element? For example, let's say I have an HTML5 Audio element trying to get audio from a local program like so:
<audio src='file://(path to program)'>
If it is possible, which libraries should I use? I just want to try it locally for now, so file:// is what I want.
EDIT: Setting the source as file:// won't work, so how can I tell it to get audio from the specific C++ program?
I am not sure about the C++ side of the question, but trying to embed a would-be program via file: will not work, as the browser would simply read the binary file foo.exe instead of calling it and reading in the standard output (or whatever).
Instead, for testing purposes, you would probably like to run the server locally on your machine, referring to it via localhost.
Certainly, if your C++ program were stand-alone, you could write/include a mini web server to service only the audio requests that come in and then execute whatever C++ code you wanted to return the data.
Otherwise you could write a C++ plugin/module for an existing web server like IIS or Apache and configure the web server to direct traffic for a specific URL to your C++ functions to return the data. This might be a little more complicated but a lot more powerful, allowing you to focus more on your audio code than on handling the HTTP protocol and TCP connections.
In either case your C++ code would then be referenced the same as any web server: <audio src='http://localhost:port/etc'>
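To make the stand-alone option concrete, here is a minimal POSIX-sockets sketch that answers every request with one WAV file as audio/wav on localhost:8080 (the file name is made up, there is no real HTTP parsing, and error handling is omitted):

#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <fstream>
#include <sstream>
#include <string>

int main() {
    int server = socket(AF_INET, SOCK_STREAM, 0);
    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(8080);
    addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);  /* localhost only */
    bind(server, (sockaddr*)&addr, sizeof(addr));
    listen(server, 4);

    for (;;) {
        int client = accept(server, nullptr, nullptr);
        char req[4096];
        recv(client, req, sizeof(req), 0);  /* read and ignore the request */

        /* load the audio; a real server could generate samples here instead */
        std::ifstream in("sound.wav", std::ios::binary);
        std::stringstream body;
        body << in.rdbuf();
        std::string data = body.str();

        std::string header =
            "HTTP/1.1 200 OK\r\n"
            "Content-Type: audio/wav\r\n"
            "Content-Length: " + std::to_string(data.size()) + "\r\n"
            "Connection: close\r\n\r\n";
        send(client, header.c_str(), header.size(), 0);
        send(client, data.c_str(), data.size(), 0);
        close(client);
    }
}

The page would then point at it with <audio src='http://localhost:8080/'> exactly as described above.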