How to make a manifold stream with a dropping buffer? (Clojure)

Using core.async I can easily create a channel with a dropping buffer:
(async/chan (async/dropping-buffer 10))
Is it possible to create a manifold stream with a dropping buffer?

You can get the same result by creating a stream with (s/stream 10) and using (s/try-put! stream msg 0) instead of plain put!, so that any put to a full stream times out immediately and the message is dropped.
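A minimal sketch of that approach (assuming manifold.stream is required as s; the wrapper name is illustrative):

```clojure
(require '[manifold.stream :as s])

;; A stream whose internal buffer holds up to 10 messages.
(def stream (s/stream 10))

;; try-put! with a 0 ms timeout: if the buffer is full, the put
;; times out immediately, so the message is effectively dropped.
;; Returns a deferred that yields false when the message was dropped.
(defn put-dropping! [msg]
  (s/try-put! stream msg 0))
```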

Related

Drop buffers in gstreamer

I am developing a gstreamer application (plugin) that sinks from a video stream, analyzes each buffer for a certain condition and then if that condition is present passes the buffer to the plugin source. If the condition is not present for a given buffer, the buffer should be dropped and the plugin source should not receive this buffer.
As a newcomer looking through the GStreamer documentation and tutorials, I cannot find a way for my plugin to "drop" a buffer.
Try using a GstPadProbe on data buffers and return GST_PAD_PROBE_DROP or GST_PAD_PROBE_HANDLED when your condition is met.
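A sketch of the probe approach (the pad variable and the buffer_meets_drop_condition helper are placeholders for your element's pad and your analysis logic):

```c
static GstPadProbeReturn
drop_probe_cb (GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
{
  GstBuffer *buf = GST_PAD_PROBE_INFO_BUFFER (info);

  if (buffer_meets_drop_condition (buf))  /* your analysis logic */
    return GST_PAD_PROBE_DROP;            /* buffer is discarded here */

  return GST_PAD_PROBE_OK;                /* buffer continues downstream */
}

/* Attach the probe so it fires for buffer data only. */
gst_pad_add_probe (pad, GST_PAD_PROBE_TYPE_BUFFER, drop_probe_cb, NULL, NULL);
```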
If your plugin is based on GstBaseTransform, you should implement your own transform_frame_ip or transform_frame. If so, you can return GST_BASE_TRANSFORM_FLOW_DROPPED:
/**
* GST_BASE_TRANSFORM_FLOW_DROPPED:
*
* A #GstFlowReturn that can be returned from transform and transform_ip to
* indicate that no output buffer was generated.
*/
#define GST_BASE_TRANSFORM_FLOW_DROPPED GST_FLOW_CUSTOM_SUCCESS

Send audio data from usermode to Sysvad (virtual audio driver) use IOCTL

In my application (user mode), I receive audio data and save it with this function:
VOID CSoundRecDlg::ProcessHeader(WAVEHDR *pHdr)
{
    MMRESULT mRes = 0;
    TRACE("%d", pHdr->dwUser);
    if (WHDR_DONE == (WHDR_DONE & pHdr->dwFlags))
    {
        mmioWrite(m_hOPFile, pHdr->lpData, pHdr->dwBytesRecorded);
        mRes = waveInAddBuffer(m_hWaveIn, pHdr, sizeof(WAVEHDR));
        if (mRes != 0)
            StoreError(mRes, TRUE, "File: %s, Line Number: %d", __FILE__, __LINE__);
    }
}
pHdr points to the audio data (byte[11025]).
How can I get this data into Sysvad using an IOCTL? Thanks for the help.
If I understand correctly, you have an audio buffer that you want to send for output in Sysvad. In that scenario you would have to write the buffer in using WriteBytes.
Please look at this example for more in-depth details:
https://github.com/microsoft/Windows-driver-samples/blob/master/audio/sysvad/EndpointsCommon/minwavertstream.cpp
UPDATE
In answer to your comment:
A circular buffer is not a must; it really depends on the implementation you want. The main point is to get the buffer into memory. Writing it is simply like this:
adapterObject->WriteEtwEvent(eMINIPORT_LAST_BUFFER_RENDERED,
    m_ullLinearPosition + ByteDisplacement, // current linear buffer position
    m_ulCurrentWritePosition,               // the last WaveRtBufferWritePosition the driver received
    0,
    0);
Ideally you would use separation of concerns, with the reading and writing logic independent of each other and the buffer object simply passed between them.
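If you do go the circular-buffer route, the core of it is small. A generic drop-oldest ring buffer in plain C (the names and capacity are illustrative, not part of the sysvad sample):

```c
#include <stddef.h>
#include <string.h>

#define RING_CAPACITY 4096  /* bytes */

typedef struct {
    unsigned char data[RING_CAPACITY];
    size_t read_pos;   /* next byte the consumer will read */
    size_t write_pos;  /* next byte the producer will write */
    size_t count;      /* bytes currently stored */
} ring_buffer;

/* Write len bytes; when the buffer is full, overwrite the oldest data. */
static void ring_write(ring_buffer *rb, const unsigned char *src, size_t len)
{
    for (size_t i = 0; i < len; i++) {
        rb->data[rb->write_pos] = src[i];
        rb->write_pos = (rb->write_pos + 1) % RING_CAPACITY;
        if (rb->count == RING_CAPACITY)
            rb->read_pos = (rb->read_pos + 1) % RING_CAPACITY; /* drop oldest */
        else
            rb->count++;
    }
}

/* Read up to len bytes; returns the number actually copied. */
static size_t ring_read(ring_buffer *rb, unsigned char *dst, size_t len)
{
    size_t n = len < rb->count ? len : rb->count;
    for (size_t i = 0; i < n; i++) {
        dst[i] = rb->data[rb->read_pos];
        rb->read_pos = (rb->read_pos + 1) % RING_CAPACITY;
        rb->count--;
    }
    return n;
}
```

The producer (your IOCTL handler) calls ring_write and the renderer calls ring_read, which keeps the two sides decoupled as described above.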

Mixing two icecast stream with liquidsoap and stream it to icecast server

I am trying to mix two streams with Liquidsoap, one on the left channel and the other on the right. How do I mix them and stream the result to an Icecast server?
I'm already streaming those two sources with DarkIce.
Here is my pseudo-code
stream1 = 'localhost/stream1'  # streamed with DarkIce on my local machine
stream2 = 'localhost/stream2'  # streamed with DarkIce on my local machine
stream3 = mix(stream1[on the left], stream2[on the right])
output.icecast(stream3)
Does anyone have an idea? I'm new to this kind of problem.
You could use input.harbor to get the streams into liquidsoap, then mix them together.
source_1 = input.harbor('source1', port=9000)
source_2 = input.harbor('source2', port=9001)
mixed = add([source_1, source_2])

output.icecast(%vorbis, id="icecast",
  mount="mystream.ogg",
  host="localhost", password="hackme",
  icy_metadata="true", description="",
  url="",
  mixed)
If the streams are already left/right panned, this should work. Otherwise liquidsoap does have a stereo.pan function.
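For the panned case, a sketch along these lines might work (assuming the harbor sources from the snippet above; the exact pan argument name may vary between Liquidsoap versions, so check your version's documentation):

```liquidsoap
left  = stereo.pan(pan=-1., source_1)  # hard left
right = stereo.pan(pan=1., source_2)   # hard right
mixed = add([left, right])
```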
Liquidsoap also has a built-in crossfade function; for more advanced fading there is the smart crossfade function.

relation between QAudioInput bufferSize() and bytesReady() in QT

I am trying to understand the relation between bufferSize() and bytesReady() for the QAudioInput class in Qt.
Assume that I have:
m_audioInput = new QAudioInput(m_Inputdevice, m_format, this);
bs = m_audioInput->bufferSize();
br = m_audioInput->bytesReady();
When I look at the values of bs and br (these are the default values; I did not change the buffer size), I see that bs is 5 times larger than br. So it looks like there is a buffer that holds 5 blocks of audio input data. My questions:
Is this a circular buffer? If I have these:
m_input = m_audioInput->start();
connect(m_input, SIGNAL(readyRead()), SLOT(myFunc()));
Then when I perform a read by:
void MainClass::myFunc()
{
    qint64 l = m_input->read(m_buffer.data(), br);
    ...
}
Does it read from the buffer in a circular manner? That is, if I perform two reads in a row after a readyRead() is emitted, does the buffer pointer move from the first block to the second (if it has 5 blocks in total)?
Is there any documentation on the buffer pointer, and if it is a circular buffer, etc.?
Are there automatic read and write pointers into the buffer? Do I need to manage them, or is that handled automatically?
Any help or pointers on this are much appreciated.
I don't really understand your use case. First, I suppose that when you call
br = m_audioInput->bytesReady();
you are either in QAudio::ActiveState or QAudio::IdleState; otherwise br is just junk.
So it looks like there is a buffer that holds 5 blocks of audio input data.
A sample is the unit of audio data. If by "5 blocks" you mean 5 samples, that is not correct; there is also no such thing as a block of audio when it comes to unencoded data.
You can compute how many seconds (or milliseconds) of audio your buffer holds:
buffer size (in bytes) / sample size (in bytes) gives the number of samples
1 / sampling frequency gives the duration of one sample in seconds
sample duration x number of samples gives the buffer length in seconds
That is for mono (one channel); otherwise you also need to divide by the number of channels.
In Qt (note that QAudioFormat::sampleSize() returns bits per sample, hence the division by 8):
double bufferSizeSeconds = (1.0 / m_format->sampleRate())
    * (m_audioInput->bufferSize() / (m_format->sampleSize() / 8))
    * (1.0 / m_format->channelCount());

parsing an XMPP stream with libxml2

I'm a beginner when it comes to libxml2, so here is my question:
I'm working on a small XMPP client. I have a stream that I receive from the network; the received buffer is fed into my Parser class chunk by chunk, as the data arrives. I may receive incomplete fragments of XML data:
<stream><presence from='user1@dom
and at the next read from the socket I should get the rest:
ain.com' to='hatter@wonderland.lit'/>
The parser should not report an error in this case; it should wait for the rest of the data.
I'm only interested in elements at depth 0 and depth 1, like stream and presence in my example above. I need to parse this kind of stream and, for each element at depth 0 or 1, create an xmlNodePtr (I have classes representing the stream and presence elements that take an xmlNodePtr as input). This means I must be able to create an xmlNodePtr from just a start element like <stream>, because the associated end element (</stream> in this case) is received only when the communication is finished.
I would like to use a pull parser.
What are the best functions to use in this case? xmlReaderForIO, xmlReaderForMemory, etc.?
Thank you!
You probably want a push parser, using xmlCreatePushParserCtxt and xmlParseChunk. Even better would be to choose one of the existing open-source C libraries for XMPP; for example, here is the code in libstrophe that already does what you want.
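A minimal push-parser sketch (error handling omitted; which SAX handler fields you fill in depends on the callbacks you need, and on_socket_data is an illustrative name for your receive routine):

```c
#include <libxml/parser.h>

/* SAX callbacks fire as soon as complete elements arrive, so an
   as-yet-unclosed <stream> root element is not a problem. */
static xmlSAXHandler sax = {0};  /* fill in startElement, endElement, ... */

static void on_socket_data(xmlParserCtxtPtr *ctxt, const char *buf, int len)
{
    if (*ctxt == NULL)
        /* First chunk: create the push parser context with it. */
        *ctxt = xmlCreatePushParserCtxt(&sax, NULL, buf, len, NULL);
    else
        /* Subsequent chunks; pass terminate=1 only on the final chunk. */
        xmlParseChunk(*ctxt, buf, len, 0);
}
```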