I am developing a GStreamer plugin that sinks a video stream, analyzes each buffer for a certain condition, and, if that condition is present, pushes the buffer out on the plugin's source pad. If the condition is not present for a given buffer, the buffer should be dropped and the source pad should not receive it.
Looking through the GStreamer documentation and tutorials as a newcomer, I cannot find a way for my plugin to "drop" a buffer.
Try using a buffer pad probe (GstPadProbe) and return GST_PAD_PROBE_DROP or GST_PAD_PROBE_HANDLED when your condition is met.
If your plugin is based on GstBaseTransform, you should implement your own transform_ip or transform (transform_frame_ip or transform_frame for a GstVideoFilter). There you can return GST_BASE_TRANSFORM_FLOW_DROPPED:
/**
* GST_BASE_TRANSFORM_FLOW_DROPPED:
*
* A #GstFlowReturn that can be returned from transform and transform_ip to
* indicate that no output buffer was generated.
*/
#define GST_BASE_TRANSFORM_FLOW_DROPPED GST_FLOW_CUSTOM_SUCCESS
In my user-mode application, I receive audio data and save it using this function:
VOID CSoundRecDlg::ProcessHeader(WAVEHDR *pHdr)
{
    MMRESULT mRes = 0;
    TRACE("%d", pHdr->dwUser);

    if (WHDR_DONE == (WHDR_DONE & pHdr->dwFlags))
    {
        mmioWrite(m_hOPFile, pHdr->lpData, pHdr->dwBytesRecorded);
        mRes = waveInAddBuffer(m_hWaveIn, pHdr, sizeof(WAVEHDR));
        if (mRes != 0)
            StoreError(mRes, TRUE, "File: %s, Line Number: %d", __FILE__, __LINE__);
    }
}
pHdr points to the audio data (a byte[11025] buffer).
How can I get this data into sysvad using an IOCTL? Thanks for the help.
If I understand correctly, you have an audio buffer that you want to send for output in sysvad. In that scenario you have to write the buffer in using "WriteBytes".
Please look at this example for more in-depth details:
https://github.com/microsoft/Windows-driver-samples/blob/master/audio/sysvad/EndpointsCommon/minwavertstream.cpp
UPDATE
in answer to your comment:
A circular buffer is not a must; it really depends on the implementation you want. The main point is to get the buffer into memory. Writing it is simply like this:
adapterObject->WriteEtwEvent(eMINIPORT_LAST_BUFFER_RENDERED,
    m_ullLinearPosition + ByteDisplacement, // current linear buffer position
    m_ulCurrentWritePosition,               // the very last WaveRtBufferWritePosition that the driver received
    0,
    0);
Ideally you would use separation of concerns, keeping the reading and writing logic independent of each other, with the buffer object simply passed between them.
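To illustrate that separation (this is a generic plain-C sketch, not sysvad code: the `ring_*` names and fixed capacity are made up), the capture side only ever calls a write function, the render side only a read function, and the ring object is the single thing shared between them:

```c
#include <stddef.h>

#define RING_CAPACITY 4096

/* Minimal byte ring buffer: the capture path calls ring_write(),
 * the render path calls ring_read(); neither knows about the other. */
typedef struct {
    unsigned char data[RING_CAPACITY];
    size_t head;   /* next write position */
    size_t tail;   /* next read position */
    size_t count;  /* bytes currently stored */
} ring_t;

static void ring_init(ring_t *r)
{
    r->head = r->tail = r->count = 0;
}

/* Returns the number of bytes actually written (may be < len when full). */
static size_t ring_write(ring_t *r, const unsigned char *src, size_t len)
{
    size_t written = 0;
    while (written < len && r->count < RING_CAPACITY) {
        r->data[r->head] = src[written++];
        r->head = (r->head + 1) % RING_CAPACITY;
        r->count++;
    }
    return written;
}

/* Returns the number of bytes actually read (may be < len when empty). */
static size_t ring_read(ring_t *r, unsigned char *dst, size_t len)
{
    size_t read = 0;
    while (read < len && r->count > 0) {
        dst[read++] = r->data[r->tail];
        r->tail = (r->tail + 1) % RING_CAPACITY;
        r->count--;
    }
    return read;
}
```

In a real driver the read and write positions would of course need proper synchronization between the threads or DPCs touching them.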
I am writing a ParaView 5.1.2 plugin in C++ to visualize point cloud data produced by a LiDAR sensor. I noticed that Velodyne has an open-source custom ParaView application called VeloView to visualize their LiDAR data. I tweaked some of their code to get started, but I am stuck now.
So far I wrote a reader that takes a pcap file and renders a point cloud that can be played back frame by frame. I also wrote a ParaView source that listens on a port and captures udp packets and after they are captured uses the reader to split them into frames and visualize the PC.
Now I would like to take live udp packets and render the point cloud in real time as each frame is completed.
I am having trouble accomplishing this because of the ParaView plugin structure. Currently, my reader displays a frame when the method RequestData is called. My method looks something like this.
int RequestData(vtkInformation *request, vtkInformationVector **inputVector, vtkInformationVector *outputVector)
{
    vtkPolyData *output = vtkPolyData::GetData(outputVector);
    vtkInformation *info = outputVector->GetInformationObject(0);

    int timestep = 0;
    if (info->Has(vtkStreamingDemandDrivenPipeline::UPDATE_TIME_STEP()))
    {
        double timeRequest = info->Get(vtkStreamingDemandDrivenPipeline::UPDATE_TIME_STEP());
        int length = info->Length(vtkStreamingDemandDrivenPipeline::TIME_STEPS());
        timestep = static_cast<int>(floor(timeRequest + 0.5));
    }

    this->Open();
    // GetFrame returns a vtkSmartPointer<vtkPolyData> that is the frame
    output->ShallowCopy(this->GetFrame(timestep));
    this->Close();
    return 1;
}
The RequestData method is called every time the timestep is updated in the ParaView gui. Then the frame from that timestep is copied into the outputVector.
I am not sure how to implement this with live data because, in that case, RequestData is not called at all, since no timesteps are requested. I saw there is a way to keep RequestData executing by setting CONTINUE_EXECUTING(), like this:
request->Set(vtkStreamingDemandDrivenPipeline::CONTINUE_EXECUTING(), 1);
But I do not know if that is supposed to be used to visualize live data.
For now I am interested in simply reading live packets and throwing them away as soon as their frame is rendered. Does anyone know how I can achieve this?
In the VeloView code (VeloView is basically a bundled ParaView + Lidar plugin), the ParaView timesteps are advanced by the main application code, not by the Lidar plugin.
We advise you to start from the VeloView code, which is much closer to your goal.
If you really want to start from scratch within ParaView, you need to increment the requested timestep yourself.
The newest (unreleased) version of VeloView uses the same mechanism as the ParaView "LiveSource" plugin (available in 5.6+), where the plugin tells ParaView to set a Qt timer that automatically increments the available and requested timesteps.
request->Set(vtkStreamingDemandDrivenPipeline::CONTINUE_EXECUTING(), 1); relates to another mechanism that runs RequestData multiple times, but it does not take care of updating the requested timestep.
Best,
Bastien Jacquet
VeloView project leader
I am trying to implement streaming audio and I've run into a problem where OpenAL gives me an error code that seems impossible given the information in the documentation.
int buffersProcessed = 0;
alGetSourcei(m_Source, AL_BUFFERS_PROCESSED, &buffersProcessed);
PrintALError();

int toAddBufferIndex;

// Remove the first buffer from the queue and move it to
// the end after buffering new data.
if (buffersProcessed > 0)
{
    ALuint unqueued;
    alSourceUnqueueBuffers(m_Source, 1, &unqueued);

    PrintALError(); // Prints AL_INVALID_OPERATION

    toAddBufferIndex = firstBufferIndex;
}
According to the documentation [PDF], AL_INVALID_OPERATION means: "There is no current context." This seems like it can't be true because OpenAL has been, and continues to play other audio just fine!
Just to be sure, I called ALCcontext* temp = alcGetCurrentContext( ); here and it returned a valid context.
Is there some other error condition that's possible here that's not mentioned in the docs?
More details: The sound source is playing when this code is being called, but the impression I got from reading the spec is you can safely unqueue processed buffers while the source is playing. PrintALError is just a wrapper for alGetError that prints if there is any error.
I am on a Mac (OS 10.8.3), in case it matters.
So far what I've gathered is that it seems this OpenAL implementation incorrectly throws an error if you unqueue a buffer while the source is playing. The spec says that you should be able to unqueue a buffer that has been marked as processing while the source is playing:
Removal of a given queue entry is not possible unless either the source is stopped (in which case the entire queue is considered processed), or if the queue entry has already been processed (AL_PLAYING or AL_PAUSED source).
On that basis I'm gonna say this is probably a bug in my OpenAL implementation. I'm gonna leave the question open in case someone can give a more concrete answer though.
To handle the condition for multiple buffers, use a loop. The following works on iOS and Linux:
// unqueue used buffers
ALint buffers_processed = 0;
alGetSourcei(streaming_source, AL_BUFFERS_PROCESSED, &buffers_processed); // get the number of processed buffers

while (buffers_processed > 0) { // we have a consumed buffer, so we need to replenish
    ALuint unqueued_buffer;
    alSourceUnqueueBuffers(streaming_source, 1, &unqueued_buffer);
    available_AL_buffer_array_curr_index--;
    available_AL_buffer_array[available_AL_buffer_array_curr_index] = unqueued_buffer;
    buffers_processed--;
}
The function query_position(gst.FORMAT_BYTES, None)[0] returns the number of bytes in the pipeline after GStreamer has decoded the video/audio. I want to know the number of bytes of the source file that were consumed for decoding up to this point in time. Is there a function in the GStreamer API to do this?
Please read the seeking chapter of the pygst docs. You can replace pos_int = self.player.query_position(gst.FORMAT_TIME, None)[0] with your version to get the bytes in real time. They are using a thread object.
You can also add a timeout method. In Python it is gobject.timeout_add(interval, callback, ...).
I have received the downloaded data size in the souphttpsrc source using the onGotChunk event. This onGotChunk is an MPEG-DASH-specific patch for the souphttpsrc element.
In general, this API can be used:
gboolean gst_element_query_duration (GstElement *element, GstFormat format, gint64 *duration);
Pass the source element as the first argument to this function and check the result.
I'm a beginner when it comes to libxml2, so here is my question:
I'm working on a small XMPP client. I have a stream that I receive from the network; the received buffer is fed into my Parser class chunk by chunk, as the data arrives. I may receive incomplete fragments of XML data:
<stream><presence from='user1@dom
and at the next read from socket I should get the rest:
ain.com' to='hatter@wonderland.lit/'/>
The parser should report an error in this case.
I'm only interested in elements at depth 0 and depth 1, like stream and presence in my example above. I need to parse this kind of stream and, for each of these depth-0 or depth-1 elements, create an xmlNodePtr (I have classes representing the stream and presence elements that take an xmlNodePtr as input). This means I must be able to create an xmlNodePtr from only a start element like <stream>, because the associated end element (</stream> in this case) is received only when the communication is finished.
I would like to use a pull parser.
What are the best functions to use in this case? xmlReaderForIO, xmlReaderForMemory, etc.?
Thank you !
You probably want a push parser, using xmlCreatePushParserCtxt and xmlParseChunk. Even better would be to use one of the existing open-source C XMPP libraries. For example, the code in libstrophe already does what you want.