Orocos ROS integration: createStream causes execution to get stuck in a loop - C++

I'm integrating Orocos with ROS. Basically, I created a component that reads data from some input ports and writes to output ports that create streams, because I want to publish these values on ROS topics and then read them in PlotJuggler.
I created this component without any problem and it works perfectly. After that, I set up another SSD configured in the same way as the previous one, on which I built this component that publishes data to ROS. But on this new SSD, when the first createStream() is called, execution gets stuck without any error, as if it were in an infinite loop. The only difference I noticed compared to the other SSD is that I get the following message:
[2022-05-06 12:33:28] [connect] WebSocket Connection 192.168.2.13:8081 v-2 "WebSocket++/0.7.0" /socket.io/?EIO=4&transport=websocket&t=1651833208 101
To create the stream, I'm using the following C++ code:
port.setName(<port_name>);
port.doc(<doc>);
port.createStream(rtt_roscomm::topic(<topic_name>));
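For context, the whole pattern looks roughly like the minimal sketch below. This is an outline only, assuming the rtt_roscomm package from rtt_ros_integration and a loaded std_msgs typekit; the component, port, and topic names are invented:

#include <rtt/RTT.hpp>
#include <rtt/Component.hpp>
#include <rtt_roscomm/rtt_rostopic.h>

class RosBridge : public RTT::TaskContext {
public:
  explicit RosBridge(const std::string& name) : RTT::TaskContext(name) {
    out_.setName("value_out");
    out_.doc("Value forwarded to a ROS topic");
    this->ports()->addPort(out_);
    // This is the call that hangs on the new SSD.
    out_.createStream(rtt_roscomm::topic("/my_values"));
  }
  void updateHook() override { out_.write(42.0); }  // publish one sample per cycle
private:
  RTT::OutputPort<double> out_;
};
ORO_CREATE_COMPONENT(RosBridge)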
I also tried creating the stream in the .ops file, but I get exactly the same result: when execution reaches that point, it gets stuck.
Again, I have a working version of this component on a similar SSD, so the code inside the component should be correct, as well as the .ops file.
I'm on Ubuntu 18.04.6 LTS, using ROS Melodic.
I have been working on this issue for a week without success; I have tried a lot of different things, but in the end, whenever I create a stream, it gets stuck.
ROS itself is working: if I create a topic and publish something from a terminal, or create a simple ROS package, I can send and receive data. Orocos is also working, because I have several components running without problems.
I know that with so little information it is difficult to solve the problem, but why does execution get stuck in this loop when I create a topic with createStream?
Is there any known case in which it is possible to get stuck in this way while creating a ROS topic?

Related

Client/server sensor data collection and ROS in C++

Basically, I have several sensors, and each sensor's data is available on a different port (as a server).
Two things I need to do:
Write a client to connect to the server to get the data
Publish data to ROS
(I prefer writing my program in C++)
I have been searching for information on the internet but still have several doubts.
Can one client connect to many different servers? I mean, by creating many sockets in one main function?
Do the client/server and ROS publish/subscribe parts have to live in the main function, or can I write them as normal functions and call them from main?
I can write a main function that gets data from one server and publishes it to ROS, but that means writing a main function for each sensor and running many programs, which is certainly not a good solution. Also, each ROS publisher has a while loop, and many loops in one main function may be a problem.
If anyone has experience with this, please give me some hints on how to build my program correctly.
Thank you.
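One possible layout, as a hedged sketch: a single process opens one TCP socket per sensor server and advertises one ROS publisher per sensor, all serviced by a single loop. The hosts, ports, topic names, and the assumption that each server pushes raw 8-byte doubles are invented for illustration; error handling is omitted.

#include <ros/ros.h>
#include <std_msgs/Float64.h>
#include <arpa/inet.h>
#include <sys/socket.h>
#include <unistd.h>
#include <vector>

static int connectTo(const char* host, uint16_t port) {
  int fd = socket(AF_INET, SOCK_STREAM, 0);
  sockaddr_in addr{};
  addr.sin_family = AF_INET;
  addr.sin_port = htons(port);
  inet_pton(AF_INET, host, &addr.sin_addr);
  connect(fd, reinterpret_cast<sockaddr*>(&addr), sizeof(addr));  // check errors in real code
  return fd;
}

int main(int argc, char** argv) {
  ros::init(argc, argv, "sensor_bridge");
  ros::NodeHandle nh;

  // One socket and one publisher per sensor, all in the same process.
  std::vector<int> socks = { connectTo("192.168.1.10", 5000),
                             connectTo("192.168.1.11", 5001) };
  std::vector<ros::Publisher> pubs = {
      nh.advertise<std_msgs::Float64>("sensor_a", 10),
      nh.advertise<std_msgs::Float64>("sensor_b", 10) };

  ros::Rate rate(100);  // one loop services every sensor; no per-sensor main needed
  while (ros::ok()) {
    for (size_t i = 0; i < socks.size(); ++i) {
      double value;
      if (recv(socks[i], &value, sizeof(value), MSG_DONTWAIT) == sizeof(value)) {
        std_msgs::Float64 msg;
        msg.data = value;
        pubs[i].publish(msg);
      }
    }
    ros::spinOnce();
    rate.sleep();
  }
  for (int fd : socks) close(fd);
  return 0;
}

So yes: one client can hold many sockets, the socket and publishing code can live in ordinary functions called from main, and a single non-blocking loop replaces the many per-publisher while loops.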

Play AudioStream of WebRTC C++ Application with Audio Device

I wrote two command line applications in C++ which use WebRTC:
Client creates a PeerConnection and opens an AudioStream
Server receives and plays the AudioStream
The basic implementation works: they exchange an SDP offer and answer, find their external IPs using ICE, a PeerConnection and a PeerConnectionFactory with the corresponding constraints are created, and so on. I added a hook on the server side to RtpReceiverImpl::IncomingRtpPacket which writes the received payload to a file. The file contains valid PCM audio, so I assume the client successfully streams data through the network to the server application.
On the server side, my callback PeerConnectionObserver::OnAddStream is called and receives a MediaStreamInterface. Furthermore, I can iterate through my audio devices with DeviceManagerInterface::GetAudioOutputDevices. So basically, everything is fine.
What is missing: I need some kind of glue to tell WebRTC to play my AudioStream on the corresponding device. I have seen that I can get AudioSink, AudioRenderer, and AudioTrack objects, but unfortunately I do not see an interface to pass them to the audio device. Can anyone help me with that?
One important note: I want to avoid debugging real hardware, so I added -DWEBRTC_DUMMY_FILE_DEVICES when building my WebRTC modules. It should write audio to an output file, but the file just contains 0x00. The input file is read successfully, because, as I mentioned earlier, audio is sent via RTP.
Finally, I found the solution. First, I have to say that my code uses a WebRTC checkout from 2017, so the following things may have changed and/or been fixed already:
After debugging my code and the WebRTC library I saw: When a remote stream is added, playback should start automatically. There is no need on the developer side to call playout() in the VoiceEngine or something comparable. When the library recognizes a remote audio stream, playback is paused, the new source is added to the mixer, and playback is resumed. The only APIs to control playback are provided by webrtc::MediaStreamInterface which is passed via the PeerConnectionObserver::OnAddStream. Examples are SetVolume() or set_enabled().
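As a hedged illustration of that control surface (written against a circa-2017 API; the observer signature and exact methods vary between WebRTC revisions, and MyObserver is just a stand-in name):

// Enabling the remote track and adjusting its volume from the observer
// callback; playback itself starts automatically as described above.
void MyObserver::OnAddStream(
    rtc::scoped_refptr<webrtc::MediaStreamInterface> stream) {
  for (const auto& track : stream->GetAudioTracks()) {
    track->set_enabled(true);            // un-mute the remote audio track
    track->GetSource()->SetVolume(5.0);  // volume range is roughly 0..10 here
  }
}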
So, what went wrong in my case? I used the FileAudioDevice class which should write raw audio data to an output file instead of speakers. My implementation contains two bugs:
FileAudioDevice::Playing() returned true in any case. Due to this, the library added the remote stream, wanted to resume playout, called FileAudioDevice::Playing() which returned true and aborted because it thought the AudioDevice was already in playout mode.
There seems to be a bug in the FileWrapper class. The final output is written in FileAudioDevice::PlayThreadProcess() via _outputFile.Write(_playoutBuffer, kPlayoutBufferSize) onto disk. However, this does not work. Luckily, a plain C fwrite() as hacky bugfix does work.
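In code, the two fixes looked roughly like this (a sketch against my 2017 checkout; member names may differ in other revisions, and _rawOutputFile is an added plain FILE* member, not part of the original class):

// Fix 1: report the real playout state instead of unconditionally 'true'.
bool FileAudioDevice::Playing() const {
  return _playing;  // was hard-coded to 'true', which aborted resume-playout
}

// Fix 2: bypass the broken FileWrapper::Write() with plain C I/O.
bool FileAudioDevice::PlayThreadProcess() {
  // ... fill _playoutBuffer from the mixer as before ...
  // _outputFile.Write(_playoutBuffer, kPlayoutBufferSize);       // writes nothing
  fwrite(_playoutBuffer, 1, kPlayoutBufferSize, _rawOutputFile);  // hacky but works
  return true;
}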

Tracking another drone with OpenCV and Pixhawk 2

I am working on a UAV team, and my task is to follow and lock onto another UAV autonomously. I coded my OpenCV part using a background subtraction method and several filters, and I can get a definite result from my code (like go forward, go left, go right). My question is: how can I send this result to the UAV's motors? How can I communicate with my UAV in C++? I've read lots of documentation from ArduPilot, ardurov, OpenCV, and Pixhawk, but still couldn't figure it out.
Connect your microprocessor (e.g. a Raspberry Pi) to the Pixhawk and use the MAVLink communication protocol to send commands; here is a link.
You can also use DroneKit to do that.
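As a hedged sketch of what the MAVLink side can look like with the C headers (c_library_v2), here is a body-frame velocity setpoint sent over UDP; the IP address, port, system IDs, velocity, and include path are all illustrative and depend on your setup:

#include <mavlink/common/mavlink.h>
#include <arpa/inet.h>
#include <sys/socket.h>
#include <unistd.h>

int main() {
  int fd = socket(AF_INET, SOCK_DGRAM, 0);
  sockaddr_in px{};
  px.sin_family = AF_INET;
  px.sin_port = htons(14550);                 // a typical MAVLink UDP port
  inet_pton(AF_INET, "192.168.1.5", &px.sin_addr);

  // "Go forward" from the vision code becomes a velocity setpoint.
  mavlink_message_t msg;
  mavlink_msg_set_position_target_local_ned_pack(
      255, 0, &msg,                 // our system/component id
      0,                            // time_boot_ms
      1, 1,                         // target system/component (the autopilot)
      MAV_FRAME_BODY_OFFSET_NED,
      0x0DC7,                       // type_mask: use only the velocity fields
      0, 0, 0,                      // position (ignored)
      1.0f, 0.0f, 0.0f,             // vx = 1 m/s forward
      0, 0, 0,                      // acceleration (ignored)
      0, 0);                        // yaw, yaw rate (ignored)

  uint8_t buf[MAVLINK_MAX_PACKET_LEN];
  uint16_t len = mavlink_msg_to_send_buffer(buf, &msg);
  sendto(fd, buf, len, 0, reinterpret_cast<sockaddr*>(&px), sizeof(px));
  close(fd);
  return 0;
}

In guided mode, the autopilot translates such setpoints into motor commands itself, so the vision code never has to drive the motors directly.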

Live streaming data from PC to PC in C++

I'm new to sending and receiving data from one PC to another. Currently, I have a C++ application generating data on Ubuntu, and I need to stream that data packet by packet to my Mac. I have been told to use RapidJSON and cURL, but I found them difficult to understand. I'm not sure which would be the best solution for my situation:
1) A C++ application running on Ubuntu continuously generates data on PC-a.
2) On PC-b (a Mac) I have a Qt C++ application that needs to receive that data live from PC-a.
3) PC-b must listen only to PC-a when receiving data.
Can anyone provide a simple example that I can follow?
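For what it's worth, here is a minimal hedged sketch of that shape: PC-a frames each packet as one JSON line with RapidJSON and streams it over a plain TCP socket. The IP, port, and payload fields are invented, and error handling is omitted.

// --- PC-a (Ubuntu): serialize and send one JSON packet per line ---
#include <rapidjson/stringbuffer.h>
#include <rapidjson/writer.h>
#include <arpa/inet.h>
#include <sys/socket.h>
#include <unistd.h>
#include <string>

int main() {
  int fd = socket(AF_INET, SOCK_STREAM, 0);
  sockaddr_in mac{};
  mac.sin_family = AF_INET;
  mac.sin_port = htons(5555);
  inet_pton(AF_INET, "192.168.0.20", &mac.sin_addr);  // PC-b's address
  connect(fd, reinterpret_cast<sockaddr*>(&mac), sizeof(mac));

  for (int seq = 0; seq < 1000; ++seq) {
    rapidjson::StringBuffer sb;
    rapidjson::Writer<rapidjson::StringBuffer> w(sb);
    w.StartObject();
    w.Key("seq");   w.Int(seq);
    w.Key("value"); w.Double(seq * 0.1);  // whatever the application generates
    w.EndObject();
    std::string line = std::string(sb.GetString()) + "\n";
    send(fd, line.data(), line.size(), 0);
  }
  close(fd);
  return 0;
}

On PC-b, the Qt application can accept() on port 5555, compare the peer address filled in by accept() against PC-a's known IP, and drop any other connection, which covers point 3.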

Multiple input MFT in Microsoft Media Foundation

I'm struggling with mixing two audio streams into a single output stream. MFNode has an AudioMixerMFT, but TopoEdit crashes when I try to build such a topology and execute it.
Note: I tried the TopoEdit that comes with Windows SDK 7.1 and also the one with a few fixes by the author of "Developing Microsoft® Media Foundation Applications".
I thought it could be some issue with TopoEdit, so I built the topology in code (by modifying the code from Chapter 9 of "Developing Microsoft® Media Foundation Applications"), but it still failed with 'E_UNEXPECTED Catastrophic failure' from mediaEvent->GetStatus(&hrStatus) inside HRESULT CPlayer::ProcessEvent(CComPtr<IMFMediaEvent>& mediaEvent) on the Session Start event.
At this point I thought it could be some issue with the AudioMixerMFT, so I wrote a custom MFT with two inputs that acts as a simple pass-through (it only forwards the first input and ignores the second). I built a topology with it in TopoEdit and it worked. But when I connected 'Audio 2.wav' to the MFT, it crashed. I then tried to use this custom MFT in my own code: it worked again with a single input but failed with 'E_UNEXPECTED Catastrophic failure' when two inputs were applied.
Not sure what the problem could be, I started to doubt whether multiple-input MFTs are supported at all. I came across a post, http://social.msdn.microsoft.com/Forums/windowsdesktop/en-US/21596e11-c4e2-480a-b28f-9e2f5fa8820d/mutlinput-and-multioutput (yes, it is quite old), that says they are not supported.
Has anyone out there been able to run the AudioMixerMFT from MFNode successfully? Are there any alternatives to Microsoft Media Foundation? Any hint would be appreciated. Thanks.
MFNode is my open source project.
If you read MFNode's documentation, you will see that TopoEdit does not handle more than one input stream on an MFT. And yes, TopoEdit crashes. You can fix the bug in the TopoEdit source code; it is just a null pointer that TopoEdit does not check. But unfortunately, that does not solve the problem: TopoEdit is not able to call ProcessInput twice, once on each input stream, before calling ProcessOutput.
You have to provide a custom media session to make it work (implement IMFMediaSession).
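To illustrate the call pattern such a session has to drive (a simplified sketch: real code must also handle MF_E_NOTACCEPTING, output-sample allocation, and flushing; PushThroughMixer is an invented helper name):

#include <mftransform.h>

// Push one sample into each input stream before pulling the mixed output;
// this is exactly the step TopoEdit never performs.
HRESULT PushThroughMixer(IMFTransform* pMixer,
                         IMFSample* pIn0, IMFSample* pIn1,
                         IMFSample** ppOut) {
  HRESULT hr = pMixer->ProcessInput(0, pIn0, 0);
  if (SUCCEEDED(hr))
    hr = pMixer->ProcessInput(1, pIn1, 0);
  if (SUCCEEDED(hr)) {
    MFT_OUTPUT_DATA_BUFFER out = {};
    DWORD status = 0;
    hr = pMixer->ProcessOutput(0, 1, &out, &status);
    if (SUCCEEDED(hr))
      *ppOut = out.pSample;  // the mixed sample
  }
  return hr;
}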
In a future update of the MFNode project, I will provide a player that uses all of the MFNode components, especially the MFNode audio mixer.
EDIT: In tededit.cpp, TopoEdit crashes in CTedEditorVisualObjectEventHandler::NotifyObjectDeleted:
...
CTedTopologyNode* pNode = m_pEditor->FindNode(pConn->GetOutputNodeID());
...
pNode can be a null pointer here, and TopoEdit does not check for it.
EDIT
I've updated my project. Check MFNodePlayer. I use a custom MediaSession to handle the wave mixer topology.
It works well, but it is not perfect, for two reasons. First, if you stop the topology and then replay it, it fails (because I must stop all sources, and perhaps reset the time clock and byte stream). Second, there is a function which handles IMFTransform in a recursive way; it is hard to debug.
I will fix these later.
PS: Special thanks to the book "Developing Microsoft Media Foundation Applications". It helped me a lot in creating a custom MediaSession.