Stream a video file while it's being recorded - C++

I am currently working on a student project where we have to create a live video streaming service, with these constraints:
We capture the video from the webcam with OpenCV.
We want to stream the video while it is being recorded.
We have a file capture.avi that is saved to the computer, and while this file is being written, we want to stream it.
Currently, we have no idea how to do this: we don't know whether the file transferred from A to B will be playable at B (via VLC, for example), and whether playback will run without interruptions.
We plan to use RTSP as the network protocol. We code everything in C++.
Here are the questions:
Does RTSP handle streaming a file that is still being written?
What source format should we use? Should we stream the frames captured by OpenCV from A to B (so that B has to use OpenCV to turn the frames back into a video), or should we let OpenCV create a video file at A and stream that file from A to B?
Thank you!

I don't believe it is safe to stream the file while it is still being written; what you need is two buffers.
The first lets whatever library you want to use write your recorded video to the local file system.
The second lets your video be streamed over the network.
Both should share the same context, and therefore the same data; that shared context manages the synchronization of the two buffers.
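A minimal sketch of that two-buffer idea, assuming OpenCV for capture and leaving the actual RTSP sender abstract (the SharedContext type, the queue, and the sendFrame call are all hypothetical names, not part of any library):

#include <opencv2/opencv.hpp>
#include <condition_variable>
#include <mutex>
#include <queue>
#include <thread>

// Shared context: the capture thread fills both the file and the queue;
// the network thread drains the queue and hands frames to the sender.
struct SharedContext {
    std::queue<cv::Mat> frames;
    std::mutex mtx;
    std::condition_variable cv;
    bool done = false;
};

void captureLoop(SharedContext& ctx) {
    cv::VideoCapture cap(0);  // webcam
    cap.set(cv::CAP_PROP_FRAME_WIDTH, 640);
    cap.set(cv::CAP_PROP_FRAME_HEIGHT, 480);
    cv::VideoWriter writer("capture.avi",
                           cv::VideoWriter::fourcc('M', 'J', 'P', 'G'),
                           30.0, cv::Size(640, 480));
    cv::Mat frame;
    while (cap.read(frame)) {
        writer.write(frame);  // buffer 1: the local file
        {
            std::lock_guard<std::mutex> lock(ctx.mtx);
            ctx.frames.push(frame.clone());  // buffer 2: the network queue
        }
        ctx.cv.notify_one();
    }
    std::lock_guard<std::mutex> lock(ctx.mtx);
    ctx.done = true;
    ctx.cv.notify_one();
}

void networkLoop(SharedContext& ctx) {
    for (;;) {
        std::unique_lock<std::mutex> lock(ctx.mtx);
        ctx.cv.wait(lock, [&] { return !ctx.frames.empty() || ctx.done; });
        if (ctx.frames.empty()) break;  // producer finished and queue drained
        cv::Mat frame = std::move(ctx.frames.front());
        ctx.frames.pop();
        lock.unlock();
        // sendFrame(frame);  // hypothetical: hand the frame to your RTSP sender
    }
}

Whether B receives raw frames (and rebuilds the video itself) or an encoded bitstream is exactly your second question; the split above works either way, since the network thread is free to encode before sending.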

Related

How to delete an audio stream from a video using FFmpeg in C++

I want to delete the audio stream from a video and keep only the video stream. But when I searched on Google I could not find any tutorial except ones about decoding. Is there a way to delete a specific stream from a video?
You cannot directly delete a stream from a file. You can, however, write a new file that contains all but one (or more) streams of the original file, which can be done without decoding and encoding the streams.
For this purpose, you can use libavformat, which is part of ffmpeg. You first have to demux the video file, which gives you packets that contain the encoded data for each stream inside the container. Then, you write (mux) these packets into a new video container. Take a look at the remuxing example for details.
Note, however, that you can get the same result by calling the ffmpeg program and passing it the appropriate parameters, which is probably easier [SO1].
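A hedged sketch of that libavformat approach, modeled on the remuxing example (file names are placeholders and error handling is trimmed; the equivalent command line is ffmpeg -i in.mp4 -c copy -an out.mp4):

extern "C" {
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
}
#include <vector>

int main() {
    AVFormatContext* in = nullptr;
    if (avformat_open_input(&in, "in.mp4", nullptr, nullptr) < 0) return 1;
    if (avformat_find_stream_info(in, nullptr) < 0) return 1;

    AVFormatContext* out = nullptr;
    avformat_alloc_output_context2(&out, nullptr, nullptr, "out.mp4");

    // Map kept streams to new indices; -1 marks a dropped (audio) stream.
    std::vector<int> map(in->nb_streams, -1);
    for (unsigned i = 0; i < in->nb_streams; i++) {
        AVStream* ist = in->streams[i];
        if (ist->codecpar->codec_type == AVMEDIA_TYPE_AUDIO) continue;
        AVStream* ost = avformat_new_stream(out, nullptr);
        avcodec_parameters_copy(ost->codecpar, ist->codecpar);
        ost->codecpar->codec_tag = 0;
        map[i] = ost->index;
    }

    avio_open(&out->pb, "out.mp4", AVIO_FLAG_WRITE);
    avformat_write_header(out, nullptr);

    AVPacket* pkt = av_packet_alloc();
    while (av_read_frame(in, pkt) >= 0) {
        if (map[pkt->stream_index] >= 0) {
            // Copy the packet as-is, rescaling timestamps to the new time base.
            av_packet_rescale_ts(pkt, in->streams[pkt->stream_index]->time_base,
                                 out->streams[map[pkt->stream_index]]->time_base);
            pkt->stream_index = map[pkt->stream_index];
            av_interleaved_write_frame(out, pkt);
        }
        av_packet_unref(pkt);
    }

    av_write_trailer(out);
    av_packet_free(&pkt);
    avio_closep(&out->pb);
    avformat_free_context(out);
    avformat_close_input(&in);
    return 0;
}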

Can I add and/or replace audio sources in oboe without restarting the audio stream?

I need to extract audio assets on the fly and load them in to a timeline for playback.
I also need to render varying lengths of the asset files, but I have an idea I'm going to try out tomorrow that I think will sort that out; any tips would be great though.
I have been playing with the oboe RhythmGame code, which is the closest of the oboe samples to what I'm trying to do, but it's not happy when I try to add or change audio sources on the fly.
Is this something oboe can do or will I have to cycle the audio stream on and off for each new set of files?
What you're proposing can definitely be done without needing to restart the audio stream. The audio stream will just request PCM data on each callback. It's your app's job to supply that PCM data.
In the RhythmGame sample, compressed audio files are decoded into memory using the DataSource object. The Player object then wraps this DataSource to control playback through the set methods.
If you need to play audio data from files in a timeline I would create a new Timeline class which copies the relevant sections of audio data from DataSources and places them sequentially into a buffer. Then your audio stream can read directly from that buffer on each callback.
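A minimal sketch of that design with Oboe's AudioStreamDataCallback (the Timeline class, its pre-filled buffer, and all member names are hypothetical; a real app would also guard the buffer against races with the audio thread):

#include <oboe/Oboe.h>
#include <algorithm>
#include <cstring>
#include <vector>

// Hypothetical Timeline: audio from the decoded DataSources has already been
// copied sequentially into mData; the callback only reads from it.
class Timeline : public oboe::AudioStreamDataCallback {
public:
    std::vector<float> mData;  // interleaved PCM, assembled ahead of time
    size_t mReadIndex = 0;

    oboe::DataCallbackResult onAudioReady(oboe::AudioStream* stream,
                                          void* audioData,
                                          int32_t numFrames) override {
        float* out = static_cast<float*>(audioData);
        size_t wanted = static_cast<size_t>(numFrames) * stream->getChannelCount();
        size_t n = std::min(wanted, mData.size() - mReadIndex);
        std::memcpy(out, mData.data() + mReadIndex, n * sizeof(float));
        std::memset(out + n, 0, (wanted - n) * sizeof(float));  // pad with silence
        mReadIndex += n;
        return oboe::DataCallbackResult::Continue;
    }
};

The stream itself never restarts; when your set of files changes, you rebuild or patch mData (ideally lock-free, or under a short lock) and the callback simply starts serving the new contents.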

Media Foundation add audio stream to video file

I was able to successfully encode an MP4 file which contains H.264 encoded video only (using IMFSinkWriter interface). Now I want to add an audio stream to it.
Whenever I try to create a sink writer for audio using:
MFCreateSinkWriterFromURL(filePath, nullptr, nullptr, &pSinkWriter)
it deletes the previous file and writes only the audio (well, according to this link that is expected).
So my question is: how do I add an audio stream to an existing file which contains only a video stream?
Or, if I have raw data for both audio and video, how do I encode both of them into a single media file? (I suppose I have to do something called multiplexing; if so, can someone provide helpful references?)
The Sink Writer API creates a media file from scratch, from the moment you call IMFSinkWriter::BeginWriting through to completion when you call IMFSinkWriter::Finalize. You don't add new streams to a finalized file (well, you can, but it works differently; see the last paragraph below).
To create a media file with both video and audio you need to add both streams before you begin: two calls to IMFSinkWriter::AddStream, then two calls to IMFSinkWriter::SetInputMediaType, then you start writing with IMFSinkWriter::BeginWriting and feed both video and audio data through IMFSinkWriter::WriteSample, providing the respective stream index.
To add a new stream to an already existing file you need to create a completely new file. One option is to read the already compressed data from the existing file and write it to the new file with IMFSinkWriter::WriteSample without re-compression, while the second stream is written with compression. This way you can create a video-and-audio MP4 file by taking the video from the existing file and adding/encoding an additional audio track.
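A skeleton of that two-stream setup (formats, sizes, and bitrates are illustrative choices and error checking is omitted; only the call order matters here):

#include <mfapi.h>
#include <mfidl.h>
#include <mfreadwrite.h>

HRESULT CreateTwoStreamWriter(IMFSinkWriter** ppWriter,
                              DWORD* pVideoIndex, DWORD* pAudioIndex) {
    IMFSinkWriter* pWriter = nullptr;
    MFCreateSinkWriterFromURL(L"output.mp4", nullptr, nullptr, &pWriter);

    // Output stream 1: H.264 video.
    IMFMediaType* pVideoOut = nullptr;
    MFCreateMediaType(&pVideoOut);
    pVideoOut->SetGUID(MF_MT_MAJOR_TYPE, MFMediaType_Video);
    pVideoOut->SetGUID(MF_MT_SUBTYPE, MFVideoFormat_H264);
    pVideoOut->SetUINT32(MF_MT_AVG_BITRATE, 800000);
    pVideoOut->SetUINT32(MF_MT_INTERLACE_MODE, MFVideoInterlace_Progressive);
    MFSetAttributeSize(pVideoOut, MF_MT_FRAME_SIZE, 640, 480);
    MFSetAttributeRatio(pVideoOut, MF_MT_FRAME_RATE, 30, 1);
    pWriter->AddStream(pVideoOut, pVideoIndex);

    // Output stream 2: AAC audio.
    IMFMediaType* pAudioOut = nullptr;
    MFCreateMediaType(&pAudioOut);
    pAudioOut->SetGUID(MF_MT_MAJOR_TYPE, MFMediaType_Audio);
    pAudioOut->SetGUID(MF_MT_SUBTYPE, MFAudioFormat_AAC);
    pAudioOut->SetUINT32(MF_MT_AUDIO_SAMPLES_PER_SECOND, 44100);
    pAudioOut->SetUINT32(MF_MT_AUDIO_NUM_CHANNELS, 2);
    pAudioOut->SetUINT32(MF_MT_AUDIO_BITS_PER_SAMPLE, 16);
    pAudioOut->SetUINT32(MF_MT_AUDIO_AVG_BYTES_PER_SECOND, 16000);
    pWriter->AddStream(pAudioOut, pAudioIndex);

    // Input types describe the raw data you will feed in, e.g. RGB32 + PCM.
    IMFMediaType* pVideoIn = nullptr;
    MFCreateMediaType(&pVideoIn);
    pVideoIn->SetGUID(MF_MT_MAJOR_TYPE, MFMediaType_Video);
    pVideoIn->SetGUID(MF_MT_SUBTYPE, MFVideoFormat_RGB32);
    pVideoIn->SetUINT32(MF_MT_INTERLACE_MODE, MFVideoInterlace_Progressive);
    MFSetAttributeSize(pVideoIn, MF_MT_FRAME_SIZE, 640, 480);
    MFSetAttributeRatio(pVideoIn, MF_MT_FRAME_RATE, 30, 1);
    pWriter->SetInputMediaType(*pVideoIndex, pVideoIn, nullptr);

    IMFMediaType* pAudioIn = nullptr;
    MFCreateMediaType(&pAudioIn);
    pAudioIn->SetGUID(MF_MT_MAJOR_TYPE, MFMediaType_Audio);
    pAudioIn->SetGUID(MF_MT_SUBTYPE, MFAudioFormat_PCM);
    pAudioIn->SetUINT32(MF_MT_AUDIO_SAMPLES_PER_SECOND, 44100);
    pAudioIn->SetUINT32(MF_MT_AUDIO_NUM_CHANNELS, 2);
    pAudioIn->SetUINT32(MF_MT_AUDIO_BITS_PER_SAMPLE, 16);
    pWriter->SetInputMediaType(*pAudioIndex, pAudioIn, nullptr);

    // Only now: BeginWriting, then WriteSample with the matching stream
    // index for each video frame and audio buffer, then Finalize.
    pWriter->BeginWriting();
    *ppWriter = pWriter;
    return S_OK;
}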

How to feed video data into a DirectShow filter to compress/encode and save to file?

First of all, here is what I'm trying to accomplish:
We'd like to add the capability to our commercial application to generate a video file to visualize data. It should be saved in a reasonably compressed format. It is important that the encoding library/codecs are licensed such that we're allowed to use and sell our software without restriction. It's also important that the media format can easily be played by a customer, i.e. can be played by Windows Media Player without requiring a codec pack to be installed.
I'm trying to use DirectShow in c++ by creating a source filter with one output pin that generates the video. I'm closely following the DirectShow samples called Bouncing Ball and Push Source. In GraphEdit I can successfully connect to a video renderer and see the video play. I have also managed to connect to AVI Mux and then file writer to write an uncompressed AVI file. The only issue with this is the huge file size. However, I have not been able to save the video in a compressed format. This problem also happens with the Ball and Push Source samples.
I can connect the output pin to a WM ASF Writer, but when I click play I get "This graph can't play. Unspecified error (Return code: 0x80004005)."
I can't even figure out how to connect to the WMVideo9 Encoder DMO ("These filters cannot agree on a connection"). I could successfully save to MJPEG, but the compression was not very substantial.
Please let me know if I'm doing something wrong in GraphEdit or if my source filter code needs to be modified. Alternatively, if there is another (non-DirectShow) option that would work for me I'm open to suggestions. Thanks.
You don't give enough detail to help with your modification of the filters; however, the Ball sample generates output which can be written to a file.
Your choice of the WM ASF Writer filter is okay: it is a stock filter and it is more or less easy to deal with. There is, however, a caveat: you need to select a video-only profile on the filter first, and only then connect the video input. The WM ASF Writer won't run with an unconnected input pin, and in its default state it also has an audio input. Of course, this can also be done programmatically.
The graph can be as simple as your source filter connected straight to the WM ASF Writer; it can be run, and it generates a playable file.
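A sketch of that programmatic configuration via IConfigAsfWriter (error checks omitted; the choice of WMProfile_V80_256Video is just one example of a video-only system profile, and the file name is a placeholder):

#include <dshow.h>
#include <dshowasf.h>   // IConfigAsfWriter
#include <wmsysprf.h>   // system profile GUIDs

HRESULT AddAsfWriter(IGraphBuilder* pGraph, IBaseFilter** ppWriter) {
    IBaseFilter* pWriter = nullptr;
    CoCreateInstance(CLSID_WMAsfWriter, nullptr, CLSCTX_INPROC_SERVER,
                     IID_IBaseFilter, reinterpret_cast<void**>(&pWriter));

    // Select a video-only profile BEFORE connecting the video input pin,
    // so the filter drops its default audio input.
    IConfigAsfWriter* pConfig = nullptr;
    pWriter->QueryInterface(IID_IConfigAsfWriter,
                            reinterpret_cast<void**>(&pConfig));
    pConfig->ConfigureFilterUsingProfileGuid(WMProfile_V80_256Video);
    pConfig->Release();

    IFileSinkFilter* pSink = nullptr;
    pWriter->QueryInterface(IID_IFileSinkFilter,
                            reinterpret_cast<void**>(&pSink));
    pSink->SetFileName(L"output.wmv", nullptr);
    pSink->Release();

    pGraph->AddFilter(pWriter, L"WM ASF Writer");
    *ppWriter = pWriter;
    return S_OK;
}

After this, connect your source filter's output pin to the writer's video input and run the graph.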

Process audio and video independently

Due to the unpopularity of my last posts here and here, I'll try something else.
I have corresponding audio (.wav) and video (.mpg) files. Let's assume the two streams were recorded synchronously. I want to process both streams, with OpenCV for the images and with "I don't know which audio lib" (you tell me?) for the audio, and I want to process them online while keeping them synchronized.
Note that the length of the video is less than 2 minutes.
Thanks for any help!
If you mean "play", then WAV files require little work to output as a PCM signal; if you are running Linux (and maybe on other OSes too, with their respective audio I/O libraries), this can then be streamed to ALSA.
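A rough sketch of that path with ALSA (this assumes a canonical 44-byte WAV header and 16-bit stereo PCM at 44.1 kHz; a real program would read those parameters from the header instead of hard-coding them; link with -lasound):

#include <alsa/asoundlib.h>
#include <cstdint>
#include <cstdio>
#include <vector>

int main() {
    std::FILE* f = std::fopen("audio.wav", "rb");
    if (!f) return 1;
    std::fseek(f, 44, SEEK_SET);  // skip the canonical WAV header

    snd_pcm_t* pcm = nullptr;
    if (snd_pcm_open(&pcm, "default", SND_PCM_STREAM_PLAYBACK, 0) < 0) return 1;
    snd_pcm_set_params(pcm, SND_PCM_FORMAT_S16_LE,
                       SND_PCM_ACCESS_RW_INTERLEAVED,
                       2,        // channels
                       44100,    // sample rate
                       1,        // allow software resampling
                       500000);  // desired latency in microseconds

    std::vector<int16_t> buf(2 * 1024);  // 1024 stereo frames
    size_t samples;
    while ((samples = std::fread(buf.data(), sizeof(int16_t), buf.size(), f)) > 0) {
        snd_pcm_sframes_t n = snd_pcm_writei(pcm, buf.data(), samples / 2);
        if (n < 0) snd_pcm_recover(pcm, static_cast<int>(n), 0);  // e.g. after an underrun
    }
    snd_pcm_drain(pcm);
    snd_pcm_close(pcm);
    std::fclose(f);
    return 0;
}

Synchronization with the OpenCV side then comes down to pacing: 44100 audio samples correspond to one second of video frames, so both processing loops can be driven from a common clock.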