I have a piece of software that performs some video analysis as soon as an event (an alarm) occurs.
Since I don't have enough space on my embedded board, I want to start recording the video only when an alarm happens.
The algorithm works on a video stored offline (it is not a real-time algorithm, so the video must be stored first; it is not enough to attach to the video stream).
At the moment I am able to attach to the video and store it as soon as I detect the alarm condition.
However, I would also like to analyze the data from the 10 seconds before the event happens.
Is it possible to pre-record up to 10 seconds as a FIFO queue, without storing the whole stream on disk?
I found something similar to my requirements here:
https://developer.ridgerun.com/wiki/index.php/GStreamer_pre-record_element#Video_pre-recording_example
but I would like to know if there is some way I can achieve the same result without using the RidgeRun tool.
Best regards
Giovanni
I think I mixed up my ideas earlier; both of them turn out to be similar.
What I suggest is the following:
Have an element that behaves like a ring buffer, through which you can stream backwards in time. A good example to try out might be the queue element. Have a look at time-shift buffering.
Then, on alarm, store the contents to a file and use another pipeline that reads from it. For example, use tee or output-selector:
                          |-> ring-buffer
src -> output-selector -> |
                          |-> (on alarm) -> ring-buffer + live-src -> file-sink
From your question I understand that your src might be a live camera, and doing this with a live source can be tricky. You might have to implement your own plugin as the RidgeRun team did; otherwise this approach is more of a hack than a clean solution. Sadly there aren't many references for such a solution, so you may have to experiment; a rough sketch of the queue-based hack is shown below.
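To make the hack concrete, here is a minimal sketch of the leaky-queue approach, assuming GStreamer 1.x, a v4l2src camera, H.264 encoding and an mp4mux/filesink back end (all of these element choices are just examples). The idea is to cap a leaky queue at roughly 10 seconds and block its src pad with a pad probe; while blocked, the queue keeps only the newest 10 seconds, and removing the probe on alarm releases the buffered data plus the live stream into the file.

```cpp
// Sketch only: pre-record ~10 s with a leaky queue held back by a blocking pad probe.
// Element choices (v4l2src, x264enc, mp4mux, filesink) are illustrative.
#include <gst/gst.h>

static gulong probe_id = 0;
static GstPad *queue_src = NULL;

// While this blocking probe is installed, nothing leaves the queue; since the
// queue is leaky downstream and capped at 10 s, it keeps only the newest data.
static GstPadProbeReturn
block_cb (GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
{
  return GST_PAD_PROBE_OK;
}

// Call this from your alarm handler: the buffered ~10 s plus the live stream
// then flow on to the muxer and file.
static void
on_alarm (void)
{
  gst_pad_remove_probe (queue_src, probe_id);
}

int
main (int argc, char *argv[])
{
  gst_init (&argc, &argv);

  GstElement *pipeline = gst_parse_launch (
      "v4l2src ! videoconvert ! x264enc tune=zerolatency ! h264parse ! "
      "queue name=prerec leaky=downstream max-size-time=10000000000 "
      "max-size-buffers=0 max-size-bytes=0 ! "
      "mp4mux ! filesink location=alarm.mp4", NULL);

  GstElement *q = gst_bin_get_by_name (GST_BIN (pipeline), "prerec");
  queue_src = gst_element_get_static_pad (q, "src");

  // Hold the data back until the alarm fires.
  probe_id = gst_pad_add_probe (queue_src,
      GST_PAD_PROBE_TYPE_BLOCK_DOWNSTREAM, block_cb, NULL, NULL);

  gst_element_set_state (pipeline, GST_STATE_PLAYING);

  // ... detect the alarm, call on_alarm(), and later send EOS before stopping
  // the pipeline so mp4mux can finalize the file ...

  g_main_loop_run (g_main_loop_new (NULL, FALSE));
  return 0;
}
```

Because the queue sits after the encoder, it buffers compressed data, so 10 seconds fits comfortably in RAM; if you buffer raw frames instead, size the queue limits accordingly.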
I have my own MediaSink in Windows Media Foundation with one stream. In the OnClockStart method, I instruct the stream to queue (i) MEStreamStarted and (ii) MEStreamSinkRequestSample on itself. To implement the queue, I use IMFMediaEventQueue, and using the mtrace tool I can also see that someone dequeues the event.
The problem is that ProcessSample on my stream is never actually called. As a result, no further samples are requested, because new requests are only issued after a sample has been processed, as in https://github.com/Microsoft/Windows-classic-samples/tree/master/Samples/DX11VideoRenderer.
Is the described approach the right way to implement the sink? If not, what would be the right way? If so, where could I search for the problem?
Some background info: the sink is an RTSP sink based on live555. Since the latter is also sink-driven, I thought it would be a good idea to queue a MEStreamSinkRequestSample whenever live555 requests more data from me. This works as intended.
However, this solution has the problem that new samples are only requested as long as a client is connected to live555. If I now add a tee before the sink, e.g. to show a local preview, the system gets out of control, because the tee accumulates samples on the output of my sink that are never fetched. I then started playing around with discardable samples (cf. https://social.msdn.microsoft.com/Forums/sharepoint/en-US/5065a7cd-3c63-43e8-8f70-be777c89b38e/mixing-rate-sink-and-rateless-sink-on-a-tee-node?forum=mediafoundationdevelopment), but the problem is that either the stream does not start, the queues keep growing, or the frame rate of the faster sink is artificially limited, depending on which side is discardable.
Therefore, the next idea was to rewrite my sink so that it always requests a new sample once it has processed the current one and puts all samples into a ring buffer for live555; whenever clients are connected they can retrieve their data from there, and otherwise the samples are simply discarded. This does not work at all: now my sink does not get anything, even without the tee.
The observation is: if I just request a lot of samples (as in the original approach), at some point I get data. However, if I request only one (I also tried moderately larger numbers, up to 5), ProcessSample is just not called, so no subsequent requests can be generated. I queue MEStreamStarted once the clock is started or restarted, exactly as described on https://msdn.microsoft.com/en-us/library/windows/desktop/ms701626, and after that I request the first sample. In my understanding, MEStreamSinkRequestSample should not get lost, so I should get something even on a single request. Is that a misunderstanding? Should I keep requesting until I get something?
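To make the described startup sequence concrete, here is a minimal, hypothetical sketch, assuming the event queue was created with MFCreateEventQueue in the sink's constructor; it uses MEStreamSinkStarted, the stream-sink counterpart of the event mentioned above, together with MEStreamSinkRequestSample. The class and member names are illustrative only.

```cpp
// Hypothetical excerpt from a custom stream sink: signal that the stream has
// started, then ask the pipeline for the first sample. Real sinks additionally
// gate requests on clock/shutdown state; this only shows the event sequence.
#include <mfidl.h>
#include <mfapi.h>
#include <cguid.h>   // GUID_NULL

class CMyStreamSink /* : public IMFStreamSink, IMFMediaEventGenerator, ... */
{
public:
    HRESULT OnClockStart(MFTIME /*hnsSystemTime*/, LONGLONG /*llClockStartOffset*/)
    {
        // Tell the pipeline that this stream sink has started.
        HRESULT hr = m_pEventQueue->QueueEventParamVar(
            MEStreamSinkStarted, GUID_NULL, S_OK, nullptr);
        if (FAILED(hr))
            return hr;

        // Request the first sample; ProcessSample() should eventually receive it,
        // and each processed sample triggers the next request.
        return m_pEventQueue->QueueEventParamVar(
            MEStreamSinkRequestSample, GUID_NULL, S_OK, nullptr);
    }

private:
    IMFMediaEventQueue *m_pEventQueue = nullptr; // created via MFCreateEventQueue()
};
```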
I am creating a program where I show some graphical content and record the face of the viewer with a webcam using DirectShow. It is very important that I know the time difference between what is on the screen and when the webcam records a frame.
I don't care at all about reducing the latency or anything like that; it can be whatever it's going to be, but I need to know the capture latency as accurately as possible.
When frames come in, I can get the stream times of the frames, but all those times are relative to some particular stream start time. How can I access the stream start time for a capture device? That value is obviously somewhere in the bowels of DirectShow, because the filter graph computes it for every frame, but how can I get at it? I've searched through the docs but haven't found its secret yet.
I've created my own classes implementing IBaseFilter and IReferenceClock, which do little more than report tons of debugging info. They seem to be doing what they need to be doing, but they don't provide enough information.
For what it is worth, I have tried to investigate this by inspecting the DirectShow Event Queue, but no events concerning the starting of the filter graph seem to be triggered, even when I start the graph.
The following image, recorded using the test app, might help explain what I'm doing. The graphical content right now is just a timer counting seconds.
The webcam is recording the screen. At the particular moment that frame was captured, the system time was about 1.35 seconds or so. The time of the sample recorded in DirectShow was 1.1862 seconds (ignore the caption in the picture). How can I account for the difference of 0.1637 seconds in this example? The stream start time is the key to deriving that value.
The system clock and the reference clock both use the QueryPerformanceCounter() function, so I would not expect it to be timer wonkiness.
Thank you.
Filters in the graph share a reference clock (unless you remove it, which is not what you want anyway), and stream times are relative to a certain base start time on this reference clock. The start time corresponds to a stream time of zero.
Normally, the controlling application does not have access to this start time: the filter graph manager chooses the value internally and passes it to every filter in the graph as the parameter of the IBaseFilter::Run call. If you have at least one filter of your own in the graph, you can capture the value there.
Getting the absolute capture time is then a matter of simple math: frame time = base time + stream time, and you can always call IReferenceClock::GetTime to check the current effective time.
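A minimal sketch of that approach, assuming the DirectShow base classes (CTransInPlaceFilter from the SDK's baseclasses library) and a hypothetical pass-through filter you drop anywhere into the graph: it records the base time handed to Run() and adds each sample's stream time.

```cpp
// Sketch only: a pass-through filter (DirectShow base classes required) that
// remembers the base time passed to Run(). Class name and CLSID are hypothetical.
#include <streams.h>   // DirectShow base classes (CTransInPlaceFilter etc.)

// Hypothetical CLSID; generate your own.
static const GUID CLSID_TimestampTap =
    { 0x11111111, 0x2222, 0x3333, { 0x44, 0x55, 0x66, 0x77, 0x88, 0x99, 0xaa, 0xbb } };

class CTimestampTap : public CTransInPlaceFilter
{
public:
    CTimestampTap(IUnknown *pUnk, HRESULT *phr)
        : CTransInPlaceFilter(NAME("Timestamp Tap"), pUnk, CLSID_TimestampTap, phr),
          m_tBase(0) {}

    // The filter graph manager hands every filter the base (start) time here.
    STDMETHODIMP Run(REFERENCE_TIME tStart) override
    {
        m_tBase = tStart;
        return CTransInPlaceFilter::Run(tStart);
    }

    HRESULT Transform(IMediaSample *pSample) override
    {
        REFERENCE_TIME tSampleStart = 0, tSampleEnd = 0;
        if (SUCCEEDED(pSample->GetTime(&tSampleStart, &tSampleEnd)))
        {
            // Absolute capture time on the reference clock, in 100 ns units.
            REFERENCE_TIME tAbsolute = m_tBase + tSampleStart;
            // ... log or store tAbsolute ...
        }
        return S_OK;   // pass the sample through unchanged
    }

    HRESULT CheckInputType(const CMediaType *) override { return S_OK; }

private:
    REFERENCE_TIME m_tBase;
};
```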
If you don't have access to the start time and you don't want to add your own filter to the graph, there is a trick you can employ to define the base start time yourself; this is what the filter graph manager does anyway.
Starting the graphs in sync means using IMediaFilter::Run instead of IMediaControl::Run... Call IMediaFilter::Run on all graphs, passing this time... as the parameter.
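A minimal sketch of that trick, assuming an already-built graph exposed as IGraphBuilder; error handling is omitted and the 100 ms start margin is an arbitrary example value.

```cpp
// Sketch only: run the graph with a base time we chose ourselves, so that
// absolute frame time = base time + sample stream time is known to the app.
#include <dshow.h>

HRESULT RunGraphWithKnownBaseTime(IGraphBuilder *pGraph, REFERENCE_TIME *pBaseTime)
{
    IMediaFilter    *pMediaFilter = nullptr;
    IReferenceClock *pClock       = nullptr;

    pGraph->QueryInterface(IID_PPV_ARGS(&pMediaFilter));
    pMediaFilter->GetSyncSource(&pClock);

    // Read the reference clock "now" and schedule the start slightly in the
    // future so every filter can start on time (100 ms margin here).
    REFERENCE_TIME tNow = 0;
    pClock->GetTime(&tNow);
    REFERENCE_TIME tBase = tNow + 100 * 10000;   // REFERENCE_TIME is in 100 ns units

    // Run with an explicit base time instead of letting the graph pick one.
    HRESULT hr = pMediaFilter->Run(tBase);

    *pBaseTime = tBase;

    pClock->Release();
    pMediaFilter->Release();
    return hr;
}
```

Since you chose tBase yourself, the absolute time of each frame is simply tBase plus the sample's stream time, with no filter of your own needed.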
Try IReferenceClock::GetTime.
Reference Clocks: https://msdn.microsoft.com/en-us/library/dd377506(v=vs.85).aspx
For more information, see:
https://social.msdn.microsoft.com/Forums/windowsdesktop/en-US/1dc4123a-05cf-4036-a17e-a7648ca5db4e/how-do-i-know-current-time-stamp-referencetime-on-directshow-source-filter?forum=windowsdirectshowdevelopment
I've used FileSystemWatcher in the past; however, I am hoping someone can explain how it actually works behind the scenes.
I plan to use it in an application I am making, and it would monitor about 5 drives and maybe 300,000 files.
Does FileSystemWatcher actually do "checking" on the drive, as in, will it cause wear and tear on the drive? Also, does it affect the hard drive's ability to "sleep"?
This is the part I don't understand: does it scan the drives on a timer, etc., or does it wait for some type of notification from the OS before it does anything?
I just do not want to implement something that is going to cause extra reads on a drive and keep the drive from sleeping.
Nothing like that. The file system driver simply checks the normal file operations requested by other programs running on the machine against the filters you've selected. If there's a match, it adds an entry to an internal buffer that records the operation and the filename, which completes the driver request and causes an event to run in your program. You'll get the details of the operation passed to you from that buffer.
So nothing extra happens to the operations themselves; there is no extra disk activity at all. It is all just software that runs. The overhead is minimal; nothing slows down noticeably.
The short answer is no. FileSystemWatcher calls the ReadDirectoryChangesW API, passing it an asynchronous flag. Basically, Windows stores data in an allocated buffer when changes to a directory occur. This function returns the data in that buffer, and FileSystemWatcher converts it into nice notifications for you.
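To illustrate the mechanism, here is a minimal sketch of the underlying Win32 call, using the simpler synchronous form rather than the overlapped form FileSystemWatcher uses; the directory path and notify filters are examples. Note there is no polling loop touching the disk: the call simply blocks until the file system reports a change.

```cpp
// Sketch only: watch a directory with ReadDirectoryChangesW (synchronous form).
#include <windows.h>
#include <cstdio>

int main()
{
    HANDLE hDir = CreateFileW(
        L"C:\\watched",                       // example directory
        FILE_LIST_DIRECTORY,
        FILE_SHARE_READ | FILE_SHARE_WRITE | FILE_SHARE_DELETE,
        nullptr,
        OPEN_EXISTING,
        FILE_FLAG_BACKUP_SEMANTICS,           // required to open a directory
        nullptr);
    if (hDir == INVALID_HANDLE_VALUE)
        return 1;

    alignas(DWORD) BYTE buffer[64 * 1024];    // buffer must be DWORD-aligned
    DWORD bytesReturned = 0;

    // Blocks until Windows has buffered one or more change records.
    while (ReadDirectoryChangesW(
               hDir, buffer, sizeof(buffer),
               TRUE,                                      // watch the subtree
               FILE_NOTIFY_CHANGE_FILE_NAME |
               FILE_NOTIFY_CHANGE_DIR_NAME |
               FILE_NOTIFY_CHANGE_LAST_WRITE,
               &bytesReturned, nullptr, nullptr))
    {
        auto *info = reinterpret_cast<FILE_NOTIFY_INFORMATION *>(buffer);
        for (;;)
        {
            // FileNameLength is in bytes; FileName is not null-terminated.
            wprintf(L"action %lu: %.*s\n", info->Action,
                    (int)(info->FileNameLength / sizeof(WCHAR)), info->FileName);
            if (info->NextEntryOffset == 0)
                break;
            info = reinterpret_cast<FILE_NOTIFY_INFORMATION *>(
                reinterpret_cast<BYTE *>(info) + info->NextEntryOffset);
        }
    }

    CloseHandle(hDir);
    return 0;
}
```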
I have a C++ object that accepts sound requests and plays them with ALSA. There is a thread that processes the sound requests. Some sounds are periodic and are rescheduled after the WAV file contents have been written to the ALSA library. Is there a way I can find out when all the data has been played? snd_pcm_writei is a blocking write function, but its returning does not necessarily mean that the file has been played.
One option I am considering is to call snd_pcm_drain after playing each sound file, then call snd_pcm_prepare when I play the next file. Would this be a good solution, or is it inefficient?
Update: The "drain solution" seems to work, but it is not very efficient. The call takes a while to return (maybe it cleans up some resources) and adds latency to the program. The latency is most noticeable when I play many small files consecutively: a few seconds of silence can be heard between files; this is snd_pcm_drain executing.
This might not be correct (I've done very little work in this area), but from looking at the ALSA docs here: http://www.alsa-project.org/alsa-doc/alsa-lib/pcm.html
It looks like snd_pcm_status_t holds the status information that should give you an indication of whether the stream is currently processing data or not.
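Along those lines, here is a hedged sketch that polls snd_pcm_status() (and the reported delay) after the last snd_pcm_writei() instead of draining; the 10 ms polling interval and the function names are just illustrative choices.

```cpp
// Sketch only: consider playback finished when the PCM is no longer RUNNING
// (e.g. it underran after the last frame) or when the queued delay reaches zero.
#include <alsa/asoundlib.h>
#include <unistd.h>

static bool playback_finished(snd_pcm_t *pcm)
{
    snd_pcm_status_t *status;
    snd_pcm_status_alloca(&status);

    if (snd_pcm_status(pcm, status) < 0)
        return true;                       // treat errors as "done" in this sketch

    snd_pcm_state_t state    = snd_pcm_status_get_state(status);
    snd_pcm_sframes_t delay  = snd_pcm_status_get_delay(status);   // frames still queued

    return state != SND_PCM_STATE_RUNNING || delay <= 0;
}

void wait_until_played(snd_pcm_t *pcm)
{
    while (!playback_finished(pcm))
        usleep(10 * 1000);                 // 10 ms; tune to your latency needs
}
```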
I am writing an application that is a kind of video streamer. The client receives a video stream over a UDP socket, and I want to play it as it is being received. This is different from playing a local video file on your hard disk, where it can be as simple as running the file with system("vlc filename"). Here many other issues are involved; for example, there can be delays in receiving, and the player will have to wait for the incoming data. I have learned that VLC can be used to play a video stream. Can you please elaborate on the steps for playing the stream using VLC? I am implementing my application in C++.
EDIT: Can somebody give me some idea of the VLC API that can be used to stream a given video to a particular destination, then receive that stream at the other end and play it?
with regards,
Mawia
Well, you can always take a look at VideoLAN's own homepage.
Other than that, streaming is quite straightforward:
1. Decide on a video codec that supports streaming (OK, obvious, and probably already done).
2. Choose an appropriate packet size.
3. Choose an appropriate video quality.
4. On the client side: pre-buffer at least 2 seconds of video and audio.
Numbers 2 and 3 sound strange, but they are worth thinking about:
If you have a broadband connection, you can afford to pump big packets over to the client. Note: "packets" here means consistent units of data that the client needs to have received completely in order to decode the next bit of video. If you send big packets, say 4 seconds of video, you risk lag while waiting for the complete data unit of, well, a full 4 seconds, whereas small 0.5-second packets would give you laggy but still recognizable and relatively fluent video on a bad connection.
The same goes for quality. Pixelated and artifact-ridden video is bad; stuttering video and desynced audio are worse. It is better to switch down to a lower quality / higher compression setting.
If your question is purely about getting it done, points 1 and 4 should do it for you.
You might ask: "What if I want to do real-time live video?"
All of the advice above still applies, but all of it has to be done more cleverly. First things first: you cannot do real time over a bad connection; that is just reality. If your connection is fat enough, you can get close to real time: pump each image and a small sound sample out without much processing or any buffering at all. It is possible to get a good client experience from that, but connections like that are highly unlikely. The usual trick is to transmit a video quality slightly lower than the connection would allow in theory, and to still work caching and packet reordering in there... have fun. It is hard.
Unfortunately, the only API VLC really has is the command line, or the equivalent of the command line (you can start player instances, passing them essentially what you would put on the command line). You can use libVLC if you need multiple instances or callbacks, but it's still pretty opaque...
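That said, a minimal libVLC sketch of the receiving/playing side might look like the following, assuming libVLC 3.x and an MPEG-TS stream arriving on UDP port 5004 (the MRL, port and caching value are examples); the sending side can simply be the vlc command line with an appropriate --sout chain.

```cpp
// Sketch only (libVLC 3.x): receive and play a UDP MPEG-TS stream as it arrives.
// The sending side could be, for example:
//   vlc input.mp4 --sout '#std{access=udp,mux=ts,dst=CLIENT_IP:5004}'
#include <vlc/vlc.h>
#include <unistd.h>

int main()
{
    libvlc_instance_t *vlc = libvlc_new(0, nullptr);
    if (!vlc)
        return 1;

    // "udp://@:5004": listen for a raw UDP (MPEG-TS) stream on port 5004.
    libvlc_media_t *media = libvlc_media_new_location(vlc, "udp://@:5004");

    // Extra caching (in ms) to absorb network jitter; 1000 is an example value.
    libvlc_media_add_option(media, ":network-caching=1000");

    libvlc_media_player_t *player = libvlc_media_player_new_from_media(media);
    libvlc_media_release(media);          // the player holds its own reference

    libvlc_media_player_play(player);     // playback starts as data arrives

    sleep(60);                            // keep the demo process alive

    libvlc_media_player_stop(player);
    libvlc_media_player_release(player);
    libvlc_release(vlc);
    return 0;
}
```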