I am writing a ParaView 5.1.2 plugin in C++ to visualize point cloud data produced by a LiDAR sensor. I noticed that Velodyne has an open-source ParaView custom application to visualize their LiDAR data, called VeloView. I tweaked some of their code to get started, but I am stuck now.
So far I have written a reader that takes a pcap file and renders a point cloud that can be played back frame by frame. I have also written a ParaView source that listens on a port, captures UDP packets, and then uses the reader to split them into frames and visualize the point cloud.
Now I would like to take live UDP packets and render the point cloud in real time as each frame is completed.
I am having trouble accomplishing this because of the ParaView plugin structure. Currently, my reader displays a frame when its RequestData method is called. The method looks something like this:
int RequestData(vtkInformation* request,
                vtkInformationVector** inputVector,
                vtkInformationVector* outputVector)
{
  vtkPolyData* output = vtkPolyData::GetData(outputVector);
  vtkInformation* info = outputVector->GetInformationObject(0);

  int timestep = 0;
  if (info->Has(vtkStreamingDemandDrivenPipeline::UPDATE_TIME_STEP()))
  {
    double timeRequest = info->Get(vtkStreamingDemandDrivenPipeline::UPDATE_TIME_STEP());
    timestep = static_cast<int>(floor(timeRequest + 0.5)); // round to the nearest timestep
  }

  this->Open();
  // GetFrame returns a vtkSmartPointer<vtkPolyData> that is the frame
  output->ShallowCopy(this->GetFrame(timestep));
  this->Close();
  return 1;
}
The RequestData method is called every time the timestep is updated in the ParaView GUI, and the frame for that timestep is copied into the output vector.
I am not sure how to implement this with live data, because in that case RequestData is never called: no timesteps are requested. I saw there is a way to keep RequestData executing by setting CONTINUE_EXECUTING(), like this:
request->Set(vtkStreamingDemandDrivenPipeline::CONTINUE_EXECUTING(), 1);
But I do not know if that is supposed to be used to visualize live data.
For now I am interested in simply reading live packets and throwing them away as soon as their frame is rendered. Does anyone know how I can achieve this?
In the VeloView code (VeloView is basically a bundled ParaView + Lidar plugin), the ParaView timesteps are advanced by the main application code, not by the Lidar plugin.
We advise you to start from the VeloView code, which is much closer to your goal.
If you really want to start from scratch within ParaView, you need to increment the requested timestep yourself.
The newest (unreleased) version of VeloView uses the same mechanism as the ParaView “LiveSource” plugin (available in 5.6+), where the plugin tells ParaView to set up a Qt timer that automatically increments the available and requested timesteps.
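For illustration only, a rough sketch of the algorithm-side hook the LiveSource mechanism expects in ParaView 5.6+ (the source proxy additionally declares a <LiveSource> hint in its server-manager XML; HasNewFrame() is a hypothetical helper set by your UDP listener thread):
// Polled periodically by ParaView's LiveSource Qt timer.
// Returning true triggers a new pipeline update, so RequestData runs again.
bool vtkMyLidarStreamSource::GetNeedsUpdate()
{
  if (this->HasNewFrame()) // hypothetical: set from the packet-listener thread
  {
    this->Modified(); // mark the source dirty
    return true;
  }
  return false;
}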
request->Set(vtkStreamingDemandDrivenPipeline::CONTINUE_EXECUTING(), 1); relates to another mechanism that will run RequestData multiple times, but it won't take care of updating the requested timestep.
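As a minimal, hedged sketch of that mechanism (GetLiveFrame() and MorePacketsExpected() are hypothetical helpers, not part of VTK):
int RequestData(vtkInformation* request,
                vtkInformationVector** inputVector,
                vtkInformationVector* outputVector)
{
  vtkPolyData* output = vtkPolyData::GetData(outputVector);

  // Hand out whatever frame has been assembled from the live packets so far.
  output->ShallowCopy(this->GetLiveFrame());

  if (this->MorePacketsExpected())
  {
    // Ask the executive to run this algorithm again immediately,
    // without waiting for a new user request.
    request->Set(vtkStreamingDemandDrivenPipeline::CONTINUE_EXECUTING(), 1);
  }
  else
  {
    request->Remove(vtkStreamingDemandDrivenPipeline::CONTINUE_EXECUTING());
  }
  return 1;
}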
Best,
Bastien Jacquet
VeloView project leader
Related
I am trying to control a robot using a template-based controller class written in C++. Essentially, I have a UDP connection set up with the robot to receive its state and to send new torque commands. I receive new observations at a higher frequency (say 2000 Hz), while my controller takes about 1 ms (1000 Hz) to calculate new torque commands.
The problem I am facing is that I don't want my main loop to wait on sending the old torque commands while the controller is still calculating new ones.
From what I understand, I can use Ubuntu with an RT Linux kernel, multi-thread the code so that my getTorques() method runs in a different thread, set priorities for the process, and use mutexes and locks to avoid data races between the two threads. But I was hoping to learn what the best strategies are for writing hard-realtime code for such a problem.
// main.cpp
#include "CONTROLLER.h"
#include "llapi.h"

int main() {
    ...
    CONTROLLERclass obj;
    ...
    double new_observation;
    double u;
    ...
    while (communicating) {
        get_newObs(new_observation);        // Get the new state of the robot (2000 Hz)
        obj.getTorques(new_observation, u); // Takes about 1 ms to calculate new torques
        send_newCommands(u);                // Send the new torque commands to the robot
    }
    ...
}
Thanks in advance!
Okay, so first of all, it sounds to me like you need to deal with the fact that you receive input at 2 kHz but can only compute results at about 1 kHz.
Based on that, you're apparently going to have to discard roughly half the inputs, or else somehow (in a way that makes sense for your application) quickly combine the inputs that have arrived since the last time you processed them.
But as the code is structured right now, you're going to fetch and process older and older inputs, so even though you're producing outputs at ~1 kHz, those outputs are constantly based on older and older data.
For the moment, let's assume you want to receive inputs as fast as you can, and when you're ready to do so, you process the most recent input you've received, produce an output based on that input, and repeat.
In that case, you'd probably end up with something on this general order (using C++ threads and atomics for the moment):
std::atomic<double> new_observation{0.0};
std::atomic<bool> communicating{true};

std::thread receiver([&] {
    while (communicating) {
        double d;
        get_newObs(d);        // blocks until the next state arrives (~2000 Hz)
        new_observation = d;  // atomically publish the latest observation
    }
});

std::thread sender([&] {
    while (communicating) {
        double input = new_observation;  // snapshot the latest observation
        auto u = get_torques(input);     // ~1 ms of computation
        send_newCommands(u);
    }
});
I've assumed that you'll always receive input faster than you can consume it, so the processing thread can always process whatever input is waiting, without needing any signal that the input has been updated since it was last processed. If that's wrong, things get a little more complex, but I'm not going to try to deal with that right now, since it sounds like it's unnecessary.
As far as the code itself goes, the only thing that may not be obvious is that instead of passing a reference to the shared new_observation to the existing functions, I've read it into a variable local to the thread, then passed a reference to that.
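For completeness, a hypothetical main() showing the thread lifetime; since the lambdas capture by reference, the atomics must outlive both threads:
#include <atomic>
#include <chrono>
#include <thread>

int main() {
    std::atomic<double> new_observation{0.0};
    std::atomic<bool> communicating{true};

    std::thread receiver([&] { /* receive loop as sketched above */ });
    std::thread sender([&] { /* control loop as sketched above */ });

    std::this_thread::sleep_for(std::chrono::seconds(60)); // hypothetical session length
    communicating = false; // ask both loops to finish

    receiver.join();
    sender.join();
}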
I am very new to the EDSDK, so sorry if the question is odd in places.
Is it possible to access a video stream and perform some operations on it using the SDK? I need this to capture a very thin region of interest (ROI) of a specified size (for example 3840x10 px) from each frame in the stream. Don't think of this as compressing the frame; the aspect ratio does not need to be preserved. In theory, these changes should increase the FPS, because the region is very thin (should they?).
I found the code snippet below in the official documentation, although it seems this only sends a signal to start and stop movie recording, without accessing the stream.
EdsUInt32 record_start = 4; // Begin movie shooting
err = EdsSetPropertyData(cameraRef, kEdsPropID_Record, 0, sizeof(record_start), &record_start);
EdsUInt32 record_stop = 0; // End movie shooting
err = EdsSetPropertyData(cameraRef, kEdsPropID_Record, 0, sizeof(record_stop), &record_stop);
I would be very thankful for any suggestions and help. Please feel free to ask for any additional information!
This SDK doesn't allow you to directly access high-resolution streams the way industrial cameras would. Over USB you can access sequential live view JPEGs at roughly 960x640. Movie recording can only be done to the internal card, and the result can only be transferred after stopping. Outside of this SDK, an external HDMI recorder gives access to a near-realtime feed at up to Full HD 1080p, depending on the model, and the feed is not always “clean”.
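As a hedged sketch under those constraints, the usual EDSDK pattern for pulling live view frames to the PC looks roughly like this; the ROI crop has to happen on the host with your own image code, since the SDK always delivers the full live view JPEG (so a thin ROI will not raise the camera-side FPS):
// Route live view output to the host PC.
EdsUInt32 device = 0;
err = EdsGetPropertyData(cameraRef, kEdsPropID_Evf_OutputDevice, 0, sizeof(device), &device);
device |= kEdsEvfOutputDevice_PC;
err = EdsSetPropertyData(cameraRef, kEdsPropID_Evf_OutputDevice, 0, sizeof(device), &device);

// Download one live view frame (a full JPEG) into a memory stream.
EdsStreamRef stream = NULL;
EdsEvfImageRef evfImage = NULL;
err = EdsCreateMemoryStream(0, &stream);        // 0 = grow as needed
err = EdsCreateEvfImageRef(stream, &evfImage);
err = EdsDownloadEvfImage(cameraRef, evfImage); // repeat per frame in your capture loop

// Decode the JPEG in 'stream' and crop your 3840x10 ROI with your own image library.

EdsRelease(evfImage);
EdsRelease(stream);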
I am working on a project in which I am controlling a quadcopter with an Android phone. I have noticed that using the DroneListener to listen for ATTITUDE_UPDATED messages does not update the yaw nearly as fast as I need. I would like to get the quadcopter's yaw updated frequently, in (more or less) real time, the way the Mission Planner application does when you plug your quadcopter in via USB.
I have tried using msg_request_data_stream (essentially what Mission Planner uses, but in C#) to request various data streams from the drone, but it has not worked. My assumption is that the response to this request (i.e., the stream) would be received by any registered MavlinkObservers; however, the custom MavlinkObserver that I have added to my drone has not had its onMavlinkMessageReceived(…) method called a single time. It has not even received heartbeat messages, which (based on my understanding) should be arriving regularly and frequently.
Is there any help you could offer? Is there a better way to get the quadcopter's yaw than waiting for the DroneListener to be notified of events? I know it should be possible with Mavlink (Mission Planner does it), but I haven't had any success with Mavlink messages or with a MavlinkObserver.
Here is my code for requesting a data stream:
msg_request_data_stream message = new msg_request_data_stream();
//not sure how doubles cast to byte, so I cast it to an int first (the value is
//always going to be an int anyway, getValue just returns doubles for parameters by default)
message.target_system = (byte)((int)params.getParameter("SYSID_THISMAV").getValue());
message.req_message_rate = (short)1;
message.req_stream_id = (byte)0;
message.start_stop = (byte)1;
//wrap the message and send it!
ExperimentalApi.sendMavlinkMessage(this, new MavlinkMessageWrapper(message));
The following code runs inside a thread and reads input data arriving over USB. Approximately every 80 reads it misses one of the packets coming from an STM32 board. The board is programmed to send a data packet to the Android tablet every second.
// Non Working Code
while (running) {
    int resp = bulkTransfer(mInEp, mBuf, mBuf.length, 1000);
    if (resp > 0) {
        dispatchMessage(mBuf);
    } else if (resp < 0) {
        showsBufferEmptyMessage();
    }
}
I was looking at the Missile Launcher example for Android and at other libraries on the internet, and they put a delay of 50 ms between polls. Doing this solves the missing-packet problem.
// Working code
while (running) {
    int resp = bulkTransfer(mInEp, mBuf, mBuf.length, 1000);
    if (resp > 0) {
        dispatchMessage(mBuf);
    } else if (resp < 0) {
        showsBufferEmptyMessage();
    }
    try {
        Thread.sleep(50); // pause between polls
    } catch (InterruptedException e) {
        // ignore
    }
}
Does anyone know the reason why the delay works? Most of the libraries on GitHub have this delay, and, as I mentioned before, so does the Google example.
I am putting down my results regarding this problem. After all, it seems that the UsbDeviceConnection.bulkTransfer(...) method has a bug when called continuously. The solution was to use the asynchronous API, the UsbRequest class. Using this approach I could read from the input endpoint without a delay, and no data was lost during the whole stress test. So the direction to take is the asynchronous UsbRequest instead of the synchronous bulkTransfer.
I am creating a DirectShow filter whose purpose is to take 3 input pins and create a video which alternately shows video from the first source, the second source, and the third source, at a fixed time interval.
So if I have three webcams connected to my filter, I want the final video, for example, to show 5 seconds of the first cam, 5 seconds of the second cam, and so on...
I have tried two approaches:
Approach one
I use a class TimeManager. This class has a function isItPinsTurn(pinname), which returns true or false depending on whether that pin is currently supposed to send its samples to the output. To do this, the TimeManager creates a new thread which sleeps for x seconds.
After sleeping, it advances the currently active input pin to the next one.
The result is that every x seconds isItPinsTurn(pinname) returns true for a different pin. This way each pin only sends output to the output pin when it is its turn, and I get the desired video with x-second intervals between the input cams.
The problem with this approach
Sleep doesn't seem to work in DirectShow filters. I get a runtime error:
abort() has been called
Approach two
I use the sample's GetMediaTime method and a buffer which keeps track of how much video (in terms of media time) has already been sent to the output pin. This is best illustrated with code:
void MyFilter::acceptFilterInput(LPCWSTR pinname, IMediaSample* sample)
{
    mylogger->LogDebug("In acceptFilterInput", L"D:\\TEMP\\yc.log");
    if (wcscmp(pinname, this->currentInputPin) == 0)
    {
        outpin->Deliver(sample);
        LONGLONG timestart;
        LONGLONG timeend;
        sample->GetTime(&timestart, &timeend);
        *mediaTimeBuffer += timeend - timestart;
        if (*mediaTimeBuffer > this->MEDIATIME)
        {
            this->SetNextPinActive(pinname);
            *mediaTimeBuffer = 0;
        }
    }
}
When the filter starts, currentInputPin is set to pin 0 (the first). Calls to acceptFilterInput (which is called by each input pin's Receive function) increase mediaTimeBuffer by the media sample's duration. If this buffer exceeds MEDIATIME (which can, for example, be 5 seconds), the buffer is reset to zero and the next pin is set active.
Problems with this approach
I am not even sure that IMediaSample::GetMediaTime returns the data I need, as it seems to return negative numbers, which doesn't make much sense. I didn't find useful information about the return value of GetMediaTime on the web.
You are expected to block execution (incoming calls to IPin::Receive) on input streams so that the other streams can catch up on their own streaming threads. You typically achieve this either by using wait/synchronization APIs, or by holding references to media samples so that the upstream peer blocks on an empty allocator while waiting for a media sample (buffer) to become available.
Yes, Sleep works well, although polling is the worst of the possible options.
Approach two does not make sense to me because I don't see any real synchronization there: there is no execution blocking, and there is no real way to "make a pin active". You cannot force data onto an input pin; you can only wait to get called with a new media sample. So you should block accepting data on one input stream/pin until you get data on another.
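To make the blocking idea concrete, here is a minimal sketch using standard C++ synchronization; TurnGate is a hypothetical helper, and in a real filter WaitForTurn() would be called at the top of each input pin's Receive() on its streaming thread:
#include <condition_variable>
#include <mutex>

// Each input pin's streaming thread blocks here until its pin is active.
class TurnGate {
public:
    explicit TurnGate(int pinCount) : active_(0), count_(pinCount) {}

    void WaitForTurn(int pin) {
        std::unique_lock<std::mutex> lock(m_);
        cv_.wait(lock, [&] { return active_ == pin; });
    }

    // Call after the active pin has delivered enough media time.
    void AdvanceTurn() {
        {
            std::lock_guard<std::mutex> lock(m_);
            active_ = (active_ + 1) % count_;
        }
        cv_.notify_all(); // wake all streaming threads; only the new active pin proceeds
    }

private:
    std::mutex m_;
    std::condition_variable cv_;
    int active_;
    int count_;
};
Blocking inside Receive like this is legal in DirectShow because each input pin receives data on its own streaming thread; the upstream source simply stalls until that pin's turn comes around again.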
Some useful relevant information on multiplexing:
How to make a DirectShow Muxer Filter - Part 1
How to make a DirectShow Muxer Filter - Part 2
GDCL MPEG-4 Multiplexer - available in source, and can multiplex data from 2+ streams