How to set the bus on Apple's AudioUnit 3D Mixer? (C++)

I'm trying to figure out how to query or set up which sound is connected to which input bus of the 3D Mixer. Say I have an explosion sound: is there a way to specifically set which bus that sound will be located on, for future reference?

Assuming you are working with an AUGraph, this is pretty straightforward: just connect the node to the desired bus number of the mixer.
// connect the file player's output to the mixer's input
checkResult(AUGraphConnectNodeInput(audioGraph,
                                    filePlayerNode,
                                    0,          // source node's output bus
                                    mixerNode,
                                    1),         // mixer input bus #1
            "AUGraphConnectNodeInput(filePlayer:out -> mixer:in)");
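Once the sound is attached to a known bus, that bus number is how you address the sound later on the mixer's input scope. As a rough sketch (assuming checkResult is the same helper as above, and that the explosion sound sits on input bus 1):

AudioUnit mixerUnit;
checkResult(AUGraphNodeInfo(audioGraph, mixerNode, NULL, &mixerUnit),
            "AUGraphNodeInfo(mixer)");

// Address the explosion's bus later, e.g. to place it in 3D space:
// scope = input, element = the bus number we connected it to (1).
checkResult(AudioUnitSetParameter(mixerUnit,
                                  k3DMixerParam_Azimuth,  // degrees
                                  kAudioUnitScope_Input,
                                  1,                      // bus #1
                                  45.0f,
                                  0),
            "AudioUnitSetParameter(azimuth, bus 1)");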

Stream OpenCV cv::Mat image to website (html5 page)

I have C++ code running on a Raspberry Pi using OpenCV to process the camera input (shape and color detection). Here is the thread where I capture images from my Pi camera (variable names are in French, sorry about that):
Mat imgOriginal;
VideoCapture camera;
int largeur = camPartage->getLargeur();   // largeur = width
int hauteur = camPartage->getHauteur();   // hauteur = height

camera.open(0);
if (!camera.isOpened())
{
    screen->dispStr(10, 1, "Cannot open the web cam");
}
else
{
    screen->dispStr(10, 1, "Open the web cam");
    camera.set(CV_CAP_PROP_FRAME_WIDTH, largeur);
    camera.set(CV_CAP_PROP_FRAME_HEIGHT, hauteur);
    camera.set(CV_CAP_PROP_FPS, 30);
}

while (1)
{
    // Re-apply the resolution if it was changed elsewhere
    if (largeur != camPartage->getLargeur() || hauteur != camPartage->getHauteur())
    {
        largeur = camPartage->getLargeur();
        hauteur = camPartage->getHauteur();
        camera.set(CV_CAP_PROP_FRAME_WIDTH, largeur);
        camera.set(CV_CAP_PROP_FRAME_HEIGHT, hauteur);
    }

    camera.grab();
    camera.retrieve(imgOriginal);
    camPartage->setImageCam(imgOriginal); // shared object

    if (thread.destruction == DESTRUCTION_SYNCHRONE)
    {
        pthread_testcancel();
    }
    usleep(20000);
}
Now I want to stream those images to my website, which is hosted on another Raspberry Pi. I have looked into GStreamer, FFmpeg and sockets, but I didn't find any good C++ example that worked for me. I'm trying to get the lowest latency possible.
Some people suggested raspistill, but I can't open the camera in another program since it is already opened by OpenCV.
If you need more information let me know; any help is appreciated.
If you need to stream your camera images from an RPi over the network, there are many approaches, depending on your needs.
One approach is to use high-level applications like MJPG-streamer, RPi IP Camera, etc.
Another approach is to stream the camera images over the network (via RTP, UDP, etc.) with GStreamer, FFmpeg, raspistill, etc. With this approach you need a receiver app (e.g. FFmpeg) to consume the stream.
There is also the approach you already described in your question: access the camera directly, capture the images, then transfer them over the network yourself. This gives you more freedom to modify the design (adding your own compression, encryption, etc.), but you have to take care of the network protocol yourself.
In your example, you can transfer each frame over a simple TCP/IP socket, or you can build a small web server. It is obvious that you can't access the camera from two apps at the same time. You can use v4l2loopback to create multiple camera interfaces and access them from multiple apps, but it won't solve your problem.
There are also good projects like rpi-webrtc-streamer and streameye which use low-level protocols to transfer images.
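For the manual TCP route, here is a minimal, hedged sketch of the sending side (POSIX sockets plus OpenCV; the server IP, port and the receiving end are assumptions you would supply). Each frame is JPEG-encoded and sent as a 4-byte length prefix followed by the JPEG bytes, so the receiver knows how much to read:

#include <opencv2/opencv.hpp>
#include <sys/socket.h>
#include <arpa/inet.h>
#include <unistd.h>
#include <vector>
#include <cstdint>

// Connect to the receiving Pi (placeholder IP/port); returns -1 on failure.
int connectToServer(const char* ip, uint16_t port)
{
    int sock = socket(AF_INET, SOCK_STREAM, 0);
    sockaddr_in addr = {};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(port);
    inet_pton(AF_INET, ip, &addr.sin_addr);
    if (connect(sock, (sockaddr*)&addr, sizeof(addr)) < 0) { close(sock); return -1; }
    return sock;
}

// JPEG-encode one frame and send <4-byte size><JPEG bytes> over the socket.
bool sendFrame(int sock, const cv::Mat& frame)
{
    std::vector<uchar> jpeg;
    cv::imencode(".jpg", frame, jpeg);
    uint32_t size = htonl((uint32_t)jpeg.size());
    if (send(sock, &size, sizeof(size), 0) != (ssize_t)sizeof(size)) return false;
    return send(sock, jpeg.data(), jpeg.size(), 0) == (ssize_t)jpeg.size();
}

// Inside your capture loop, after camPartage->setImageCam(imgOriginal):
//     sendFrame(sock, imgOriginal);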

ParaView: Live point cloud visualization plugin

I am writing a ParaView version 5.1.2 plugin in C++ to visualize point cloud data produced by a LiDAR sensor. I noticed that Velodyne has an open source ParaView custom application to visualize their LiDAR data called Veloview. I tweaked some of their code to start but I am stuck now.
So far I have written a reader that takes a pcap file and renders a point cloud that can be played back frame by frame. I also wrote a ParaView source that listens on a port, captures UDP packets, and then uses the reader to split them into frames and visualize the point cloud.
Now I would like to take live UDP packets and render the point cloud in real time as each frame is completed.
I am having trouble accomplishing this because of the ParaView plugin structure. Currently, my reader displays a frame when the method RequestData is called. My method looks something like this:
int RequestData(vtkInformation *request, vtkInformationVector **inputVector, vtkInformationVector *outputVector)
{
    vtkPolyData* output = vtkPolyData::GetData(outputVector);
    vtkInformation* info = outputVector->GetInformationObject(0);

    int timestep = 0;
    if (info->Has(vtkStreamingDemandDrivenPipeline::UPDATE_TIME_STEP()))
    {
        double timeRequest = info->Get(vtkStreamingDemandDrivenPipeline::UPDATE_TIME_STEP());
        int length = info->Length(vtkStreamingDemandDrivenPipeline::TIME_STEPS());
        timestep = static_cast<int>(floor(timeRequest + 0.5));
    }

    this->Open();
    // GetFrame returns a vtkSmartPointer<vtkPolyData> that is the frame
    output->ShallowCopy(this->GetFrame(timestep));
    this->Close();
    return 1;
}
The RequestData method is called every time the timestep is updated in the ParaView GUI. The frame for that timestep is then copied into the outputVector.
I am not sure how to implement this with live data, because in that circumstance RequestData is not called: no timesteps are requested. I saw there is a way to keep RequestData executing by using CONTINUE_EXECUTING(), like this:
request->Set(vtkStreamingDemandDrivenPipeline::CONTINUE_EXECUTING(), 1);
But I do not know if that is supposed to be used to visualize live data.
For now I am interested in simply reading live packets and throwing them away as soon as their frame is rendered. Does anyone know how I can achieve this?
In the code of VeloView (which basically is a bundled ParaView + Lidar plugin), the timesteps of ParaView are changed by the main code, not by the Lidar plugin.
We advise you to start from the VeloView code, which is much closer to your goal.
If you really want to start from scratch within ParaView, you need to increment the requested timestep yourself.
The newest (unreleased) version of VeloView uses the same mechanism as ParaView's “LiveSource” plugin (available in 5.6+), where the plugin tells ParaView to set a QTimer that automatically increments the available and requested timesteps.
request->Set(vtkStreamingDemandDrivenPipeline::CONTINUE_EXECUTING(), 1); relates to another mechanism that will run RequestData multiple times, but it won't take care of updating the requested timestep.
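If you do go down the CONTINUE_EXECUTING() route, a hedged sketch of what that could look like is below: the reader advances its own timestep counter and asks the executive to run again (this->CurrentTimestep and this->StreamingActive are assumed member variables of your reader, not existing VeloView or ParaView API):

int RequestData(vtkInformation *request, vtkInformationVector **inputVector, vtkInformationVector *outputVector)
{
    vtkPolyData* output = vtkPolyData::GetData(outputVector);

    // Render whatever frame has been assembled so far from the live UDP packets.
    output->ShallowCopy(this->GetFrame(this->CurrentTimestep));

    // Advance our own notion of time and ask the pipeline to execute again.
    this->CurrentTimestep++;
    if (this->StreamingActive)
    {
        request->Set(vtkStreamingDemandDrivenPipeline::CONTINUE_EXECUTING(), 1);
    }
    else
    {
        request->Remove(vtkStreamingDemandDrivenPipeline::CONTINUE_EXECUTING());
    }
    return 1;
}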
Best,
Bastien Jacquet
VeloView project leader

Windows: detect the same device with both the Bluetooth API and setupapi

I'm currently creating a program that's divided into two parts: one where I detect nearby Bluetooth devices and connect them to the PC if the name matches, and one where I search for the device with setupapi and get a handle for HID communication.
My problem is that I cannot find anything that tells me that the device I just connected is the same one I found through setupapi.
So in the first part I have something like this:
BLUETOOTH_DEVICE_INFO btdi;
//--- Code omitted ---
BluetoothGetDeviceInfo(radio_handle, &btdi);
if (std::wstring(btdi.szName) == /*my name*/)
    // Device found! Now connect.
    BluetoothSetServiceState(radio_handle, &btdi, &HumanInterfaceDeviceServiceClass_UUID, BLUETOOTH_SERVICE_ENABLE);
And the setupapi related code:
SP_DEVICE_INTERFACE_DATA device_data;
device_data.cbSize = sizeof(SP_DEVICE_INTERFACE_DATA);
//--- Code omitted ---
SetupDiEnumDeviceInterfaces(device_infos, NULL, &hid_guid, index, &device_data);
I was thinking about using the Bluetooth address of the device, but there seems to be no way to get it from setupapi.
So, to recap: is there any way to get the address of the device from setupapi? And, if not, is there any other way to be sure that they're both the same device?
Here I posted the code showing how to find a Wiimote connected as an HID device using its MAC address. You have to rework that code so it works with your HID device (change the VID and PID).
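The core of that approach is that, for Bluetooth HID devices, the remote device's address usually shows up inside the device instance ID that setupapi can give you. A hedged sketch (the exact ID layout varies by Bluetooth stack and driver, so the substring check is an assumption to verify on your system):

// For each interface returned by SetupDiEnumDeviceInterfaces, also grab the
// SP_DEVINFO_DATA (last parameter of SetupDiGetDeviceInterfaceDetail) and then
// check whether the device instance ID contains the Bluetooth address.
bool InstanceIdContainsAddress(HDEVINFO device_infos, SP_DEVINFO_DATA& devinfo,
                               const std::wstring& mac_no_separators /* e.g. L"AABBCCDDEEFF" */)
{
    WCHAR id[512] = {};
    if (!SetupDiGetDeviceInstanceIdW(device_infos, &devinfo, id, ARRAYSIZE(id), NULL))
        return false;
    // Many Bluetooth HID stacks embed the remote address at the end of the instance ID.
    return std::wstring(id).find(mac_no_separators) != std::wstring::npos;
}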

How does veins calculate RSSI in a Simple Path Loss Model?

We are working on an application based on the Veins framework which needs the RSSI value of the received signal and the distance between sender and receiver.
We referred to the VeReMi project, which also calculates an RSSI value and sends it to the upper level.
We compared our simulation results (RSSI vs distance) with the VeReMi dataset and they look quite different. Can you help us understand how the RSSI is calculated and whether our result is normal?
In our application, we obtain the distance and RSSI value by:
auto distance = sender.getPosition().distance(receiverPos);
auto senderRSSI = sender.getRssi();
At the lower level, the RSSI is set in the Decider80211p::processSignalEnd(AirFrame* msg) method, as in the VeReMi project:
if (result->isSignalCorrect()) {
    DBG_D11P << "packet was received correctly, it is now handed to upper layer...\n";
    // go on with processing this AirFrame, send it to the Mac-Layer
    WaveShortMessage* decap = dynamic_cast<WaveShortMessage*>(static_cast<Mac80211Pkt*>(frame->decapsulate())->decapsulate());
    simtime_t start = frame->getSignal().getReceptionStart();
    simtime_t end = frame->getSignal().getReceptionEnd();
    double rssiValue = calcChannelSenseRSSI(start, end);
    decap->setRSSI(rssiValue);
    phy->sendUp(frame, result);
}
Regarding the simulation configuration, our config.xml differs from VeReMi's; the following lines are not present in our case:
<AnalogueModel type="VehicleObstacleShadowing">
<parameter name="carrierFrequency" type="double" value="5.890e+9"/>
</AnalogueModel>
The 802.11p-specific parameters and NIC settings in omnetpp.ini are the same.
Also, our simulation is based on a map of Boston.
The scatter plot of RSSI vs distance from our simulation is shown in the following figure: even at distances beyond 1000 meters we still receive signals with strong RSSI values.
For comparison, we extracted data from the VeReMi dataset and plotted RSSI vs distance, shown in the following picture: this is what we were expecting, with RSSI decreasing as distance increases.
Can you help us explain whether our result is normal and what may cause the issue we have now? Thanks!
I am not familiar with the VeReMi project, so I do not know what value it is referring to as "the RSSI" when a frame is received. The accompanying arXiv paper mentions no more detail than that "the RSSI of the receiver" is logged on frame receptions.
Cursory inspection of the code for logging the dataset you mentioned shows that, on every reception of a frame, a method is called that sums up the power levels of all transmissions currently present at the receiver.
From this, it appears quite straightforward that (a) how far a frame has traveled when it arrives at the receiver has only little relation to (b) the total amount of power experienced by the receiver at that time.
If you are interested in the Received Signal Strength (RSS) of every frame received, there is a much simpler path you can follow: taking Veins version 5 alpha 1 as an example, your application layer can access the ControlInfo of a frame and, from there, its RSS, e.g., as follows:
check_and_cast<DeciderResult80211*>(check_and_cast<PhyToMacControlInfo*>(wsm->getControlInfo())->getDeciderResult())->getRecvPower_dBm(). The same approach should work for Veins 4.6 (which, I believe, is what the VeReMi dataset you are referring to is based on) as well.
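Wrapped into an application-layer handler, that could look like the following sketch (MyAppLayer and onWSM are placeholder names for your own application class and its receive handler; the cast chain is the one given above):

void MyAppLayer::onWSM(WaveShortMessage* wsm)
{
    // Pull the per-frame receive power out of the control info attached by the PHY.
    DeciderResult80211* result = check_and_cast<DeciderResult80211*>(
        check_and_cast<PhyToMacControlInfo*>(wsm->getControlInfo())->getDeciderResult());
    double rssDbm = result->getRecvPower_dBm();
    EV << "Received frame with RSS " << rssDbm << " dBm\n";
}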
In simulations that only use SimplePathlossModel, Veins' version of a free space path loss model, this will result in the familiar curve of RSS falling off monotonically with distance.

Canon SDK (EDSDK) capture region of specified size for video stream

I am very new to the EDSDK, so sorry if the question is oddly phrased in places.
Is it possible to access a video stream and perform some operations on it using the SDK? I need this to capture a very thin region (ROI) of a specified size (for example 3840x10 px) from each frame in the stream. Don't think of this as compressing the frame; the aspect ratio does not need to be preserved. In theory these changes should increase the fps, because the region will be very thin (should they?).
I found the code snippet below in the official documentation, although it seems this only sends a signal to start and stop movie recording, without accessing the stream.
EdsUInt32 record_start = 4; // Begin movie shooting
err = EdsSetPropertyData(cameraRef, kEdsPropID_Record, 0, sizeof(record_start), &record_start);
EdsUInt32 record_stop = 0; // End movie shooting
err = EdsSetPropertyData(cameraRef, kEdsPropID_Record, 0, sizeof(record_stop), &record_stop);
I would be very thankful for any suggestions and help. Please feel free to ask for any additional information!
This SDK doesn't allow you to directly access high-resolution streams the way industrial cameras do. Over USB you can access ~960x640 live view images as sequential JPEGs. Movie recording can only be done to the internal card, and the result can only be transferred after recording stops. Outside of this SDK, an external HDMI recorder gives access to a near-realtime feed at up to Full HD 1080p, depending on the model and not always “clean”.
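If the ~960x640 live view path is acceptable, a hedged sketch of grabbing one EVF frame over USB looks like this (it assumes live view has already been enabled by setting kEdsPropID_Evf_OutputDevice to kEdsEvfOutputDevice_PC and that cameraRef is an open session); the thin ROI would then be cropped on the host after decoding the JPEG:

EdsStreamRef stream = NULL;
EdsEvfImageRef evfImage = NULL;

EdsError err = EdsCreateMemoryStream(0, &stream);
if (err == EDS_ERR_OK) err = EdsCreateEvfImageRef(stream, &evfImage);
if (err == EDS_ERR_OK) err = EdsDownloadEvfImage(cameraRef, evfImage);

if (err == EDS_ERR_OK)
{
    // The stream now holds one live view JPEG frame.
    EdsUInt64 length = 0;
    EdsGetLength(stream, &length);
    EdsVoid* data = NULL;
    EdsGetPointer(stream, &data);
    // Decode (data, length) as JPEG with your image library of choice, then crop
    // the region of interest. Note the EVF frame is only ~960x640, so a strip
    // 3840 px wide is not available over USB.
}

if (evfImage) EdsRelease(evfImage);
if (stream) EdsRelease(stream);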