I have uploaded an .mp4 file to my server. The video file size is 58 MB. I have some questions:
If 10 users stream this video at the same time, how much bandwidth will they use?
If the video consumes too much bandwidth, what is the alternative?
I do not have much bandwidth.
Bandwidth usage for streaming is essentially the same as for downloading the file, so for 10 users: bandwidth = 10 * SizeOfVideoFile. If every user watches the whole video and the file size is, for example, 58 MB, then 58 * 10 = 580 MB of bandwidth is used. If a user leaves in the middle of the video, that user consumes only about 58 / 2 = 29 MB.
As an alternative, you may use a streaming technique such as HLS to adapt to the target device, or decrease the video size while streaming, for example with ffmpeg. You can also use ffmpeg to generate the HLS stream.
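As a starting point, a command along these lines repackages the existing .mp4 into HLS segments without re-encoding (the segment length, file names and copy-only codec choice are assumptions to adapt); it is wrapped in a tiny C++ launcher only so it can sit inside a server-side tool:

```
// Minimal sketch: call ffmpeg from C++ to repackage an .mp4 into an HLS
// playlist plus segments. "-codec copy" keeps the existing streams; replace it
// with re-encoding options (lower bitrate, scale filter, ...) if the goal is
// to reduce bandwidth rather than just to segment the file.
#include <cstdlib>

int main() {
    return std::system(
        "ffmpeg -i input.mp4 -codec copy -start_number 0 "
        "-hls_time 10 -hls_list_size 0 -f hls stream.m3u8");
}
```

The resulting stream.m3u8 and segment files can then be served as static files to HLS-capable players.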
I am writing a C++ program (Win64) using C++ Builder 11.1.5 that captures video from a web cam and stores the captured frames in a WMV file using the sink writer interface as described in the following tutorial:
https://learn.microsoft.com/en-gb/windows/win32/medfound/tutorial--using-the-sink-writer-to-encode-video?redirectedfrom=MSDN
The video doesn't need to be real time at 30 frames per second, as the process being recorded is a slow one, so I have set the FPS to 5 (which is fine).
The recording needs to run for about 8-12 hours at a time, and using the algorithms in the sink writer tutorial, I have seen the memory consumption of the program go up dramatically after 10 minutes of recording (in excess of 10 GB of memory). I have also seen that the final WMV file only becomes populated when the Finalize routine is called. Because of the memory consumption, the program starts to slow down after a while.
First Question: Is it possible to flush the sink writer to free up RAM while it is recording?
Second Question: Maybe it would be more efficient to save the video in pieces, finalize the recording every 10 minutes or so, then start another recording using a different file name, so that when the 8 hours are done the program could combine all the saved WMV files? How would one go about combining numerous WMV files into one large file?
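One way to approach the second idea is to rotate sink writers: finalize the current writer every few minutes and immediately open a new one with a new file name, so each segment is flushed to disk and its buffered samples are released. A rough sketch, assuming the stream setup from the Microsoft tutorial; ConfigureStreams, StartSegment and OnSample are hypothetical helpers:

```
// Sketch: rotate IMFSinkWriter instances so each segment is finalized (and its
// buffered samples released) every few minutes instead of once after 8+ hours.
// ConfigureStreams is a hypothetical helper that performs AddStream /
// SetInputMediaType exactly as in the Microsoft sink writer tutorial.
#include <mfapi.h>
#include <mfreadwrite.h>
#include <atlbase.h>   // CComPtr
#include <string>

HRESULT ConfigureStreams(IMFSinkWriter *pWriter, DWORD *pStreamIndex);

HRESULT StartSegment(int segmentNo, CComPtr<IMFSinkWriter> &writer, DWORD &streamIndex)
{
    std::wstring name = L"capture_" + std::to_wstring(segmentNo) + L".wmv";
    HRESULT hr = MFCreateSinkWriterFromURL(name.c_str(), nullptr, nullptr, &writer);
    if (SUCCEEDED(hr)) hr = ConfigureStreams(writer, &streamIndex);
    if (SUCCEEDED(hr)) hr = writer->BeginWriting();
    return hr;
}

// Called for every captured frame; rtTimestamp is the 100-ns sample time
// maintained by the capture loop, as in the tutorial.
const LONGLONG SEGMENT_LENGTH = 10LL * 60 * 10000000; // 10 minutes in 100-ns units

void OnSample(IMFSample *pSample, LONGLONG rtTimestamp,
              CComPtr<IMFSinkWriter> &writer, DWORD &streamIndex,
              LONGLONG &segmentStart, int &segmentNo)
{
    if (rtTimestamp - segmentStart >= SEGMENT_LENGTH)
    {
        writer->Finalize();                          // closes the current .wmv
        writer.Release();
        StartSegment(++segmentNo, writer, streamIndex);
        segmentStart = rtTimestamp;
    }
    pSample->SetSampleTime(rtTimestamp - segmentStart); // each file starts at t = 0
    writer->WriteSample(streamIndex, pSample);
}
```

The individual segments could then be joined in a separate pass afterwards, for example by reading them back with a source reader and writing them into one final sink writer, or with an external re-muxing tool; the sketch only covers the rotation part.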
I need to stream WebM video to a browser from my video server.
The video server (C++) receives VP8-encoded frame packets of a webcam or screen from the client, with IVF-style headers like <4_bytes_data_size><8_bytes_pts><vp8_encoded_data>. I also send 4 bytes of total packet duration before the rest of the data, so the server knows the presentation timestamp, size and duration of each frame.
The question is: which headers should I use for the frames so that the browser is able to play the stream in a <video> tag? Maybe there is some standard for real-time WebM streaming to implement?
PS: AFAIK, WebM consists of EBML markup. If the same is used by the <video> tag to parse the stream, could someone explain what the minimal set of EBML elements for video playback is (no audio, just video)?
The video tag does not support IVF. The minimum WebM requirement is whatever the minimum is to package your stream.
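As an illustration of "packaging your stream", here is a rough sketch that wraps raw VP8 packets in a WebM container with libavformat; the resolution, millisecond time base and output file name are assumptions:

```
// Sketch: wrap raw VP8 packets in a WebM container using libavformat,
// so the result can be played by a <video> tag. Error handling trimmed.
extern "C" {
#include <libavformat/avformat.h>
}

int main()
{
    AVFormatContext *oc = nullptr;
    avformat_alloc_output_context2(&oc, nullptr, "webm", "out.webm");

    AVStream *st = avformat_new_stream(oc, nullptr);
    st->codecpar->codec_type = AVMEDIA_TYPE_VIDEO;
    st->codecpar->codec_id   = AV_CODEC_ID_VP8;
    st->codecpar->width      = 640;        // assumed resolution
    st->codecpar->height     = 480;
    st->time_base            = {1, 1000};  // pts in milliseconds (assumption)

    avio_open(&oc->pb, "out.webm", AVIO_FLAG_WRITE);
    avformat_write_header(oc, nullptr);    // writes the EBML/Segment headers

    // For each frame received from the client (size, pts, duration, data):
    //   AVPacket pkt; av_new_packet(&pkt, size); memcpy(pkt.data, data, size);
    //   pkt.pts = pkt.dts = pts; pkt.duration = duration; pkt.stream_index = st->index;
    //   av_interleaved_write_frame(oc, &pkt); av_packet_unref(&pkt);

    av_write_trailer(oc);
    avio_closep(&oc->pb);
    avformat_free_context(oc);
    return 0;
}
```

For real-time delivery, the same packets can be written into a custom AVIOContext (or a pipe) instead of a file; the container layout the browser sees is the same.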
I couldn't find any information about the way av_interleaved_write_frame deals with video and audio packets.
I have multiple audio and video packets coming from 2 threads. Each thread calls write_video_frame or write_audio_frame, locks a mutex, initializes an AVPacket and writes data to an .avi file.
Initialization of the AVCodecContext and AVFormatContext is OK.
-- Edit 1 --
Audio and video are coming from an external source (microphone and camera) and are captured as raw data without any compression (even for video).
I use H.264 to encode the video and no compression for the audio (PCM).
The captured audio is 16-bit, 44100 Hz, stereo.
The captured video is 25 FPS.
Questions:
1) Is it a problem if I write multiple video packets at once (let's say 25 packets/sec) and just one audio packet per second?
Answer: Apparently not; av_interleaved_write_frame should be able to manage that kind of data as long as pts and dts are managed correctly.
This means I call av_interleaved_write_frame 25 times per second for video and just once per second for audio. Could this be a problem? If it is, how can I deal with this scenario?
2) How can I manage pts and dts in this case? It seems to be a problem in my application, since I cannot correctly render the .avi file. Can I use real-time stamps for both video and audio?
Answer: The best thing to do here is to use the timestamps obtained when capturing the audio/video as pts and dts for this kind of application. So they are not exactly real-time stamps (from a wall clock) but media capture timestamps.
Thank you for your precious advice.
av_interleaved_write_frame writes output packets in such a way that they are properly interleaved (possibly queueing them internally). "Properly interleaved" depends on the container format, but usually it means that the DTS stamps of the packets in the output file are monotonically increasing.
av_interleaved_write_frame, like most FFmpeg APIs, should never be called simultaneously by two threads with the same AVFormatContext. I assume you make sure of that with a mutex. If you do, then it doesn't matter whether the application is multithreaded or not.
Is it a problem if I write multiple video packets at once (let's say 25 packets/sec) and just one audio packet/sec?
It is not a problem in general, but most audio codecs can't output 1-second-long audio packets. Which codec do you use?
How can I manage pts and dts in this case? Can I use real time stamps for both video and audio?
The same way as you would in a single-threaded application. DTS are usually generated by the codec from PTS. PTS usually comes from a capture device/decoder together with the corresponding audio/video data.
Real-time stamps might be OK to use, but it really depends on how and when you acquire them. Please elaborate on what exactly you are trying to do. Where is the audio/video data coming from?
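To make the timestamp handling concrete, here is a minimal sketch under stated assumptions (capture timestamps in microseconds, one mutex shared by both writer threads, and an encoder that does not reorder frames); the helper name write_packet is illustrative:

```
// Sketch: rescale a capture timestamp to the stream time base and write the
// packet; both the audio and the video thread call this under the same mutex,
// since av_interleaved_write_frame must not run concurrently on one context.
extern "C" {
#include <libavformat/avformat.h>
#include <libavutil/mathematics.h>
}
#include <mutex>

std::mutex g_write_mutex;

void write_packet(AVFormatContext *oc, AVStream *st,
                  uint8_t *data, int size, int64_t captureTimeUs)
{
    AVPacket pkt;
    av_init_packet(&pkt);          // on newer FFmpeg, prefer av_packet_alloc()
    pkt.data = data;
    pkt.size = size;
    pkt.stream_index = st->index;

    // Rescale the capture timestamp (microseconds) into the stream time base.
    AVRational us = {1, 1000000};
    pkt.pts = av_rescale_q(captureTimeUs, us, st->time_base);
    pkt.dts = pkt.pts;             // fine only if the encoder emits no B-frames;
                                   // otherwise take dts from the encoder output

    std::lock_guard<std::mutex> lock(g_write_mutex);
    av_interleaved_write_frame(oc, &pkt);
}
```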
I have written an RTSP on-demand server in C++ using live555 and I am able to host an RTSP stream. I then used VLC to connect to the server through the WAN, and the image streams and looks great. Then I went to another computer and connected to the RTSP stream, and now I am seeing that both videos become choppy.
The data is H.264-compressed and the resolution of the image is 800x600. The symptoms look like there isn't enough bandwidth.
Basically my question is: how many concurrent RTSP connections can be handled over the WAN with live555? Has anyone else been able to stream reliably over the WAN using live555?
Thanks in advance.
This is mostly dependent on your WAN up-link bandwidth and your video bit-rate.
Let's try to estimate the bit-rate of your video using a common rule of thumb. Assuming a moderate level of motion and 30 fps video, this gives roughly 3 Mbps (800 x 600 x 30 x 3 x 0.07) in your case. So if your up-link bandwidth is less than 6 Mbps, you cannot stream 2 videos simultaneously.
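Spelled out, that estimate is just a multiplication (the motion factor of 3 and the 0.07 constant are the assumed values of the rule of thumb):

```
// Back-of-the-envelope bit-rate estimate used above.
#include <cstdio>

int main() {
    const double width = 800, height = 600, fps = 30;
    const double motionFactor = 3;   // assumed "moderate" motion
    const double k = 0.07;           // constant from the rule of thumb
    const double bps = width * height * fps * motionFactor * k; // ~3.0 Mbps
    std::printf("one stream : %.1f Mbps\n", bps / 1e6);
    std::printf("two streams: %.1f Mbps\n", 2 * bps / 1e6);
    return 0;
}
```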
Other than that, live555 doesn't have any hard-coded limitations in this regard.
I receive audio packets from the network (4 packets per second, 250 ms each) and video at 15 fps. Everything comes with my own timestamps. How should I sync them? I've seen the source code of one of our developers, and he synced the VIDEO according to the audio, i.e. the audio is always played immediately and the video can be dropped or buffered. I don't think that is correct, because the audio can overrun the video by a second or two; in that case we will not have any actual video frames at all.
I'd like to know some basics of syncing. What should be buffered? Should audio and video, when played in sync, run in separate thread(s)? Any clues would be appreciated!
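To illustrate what that audio-master approach looks like, here is a rough sketch (not the actual code I saw; the queue type, names and thresholds are made up): both streams are buffered briefly, the audio plays continuously and drives the clock, and each video frame is shown, delayed or dropped against that clock.

```
// Rough sketch of audio-master sync: pre-buffer both streams, let the audio
// output drive a clock, and present/drop video frames against that clock.
// Real code needs locking around the queue and the clock variable.
#include <chrono>
#include <cstdint>
#include <deque>
#include <thread>

struct VideoFrame { int64_t pts_ms; /* decoded picture ... */ };

std::deque<VideoFrame> videoQueue;  // filled by the network/decoder thread
int64_t audioClockMs = 0;           // pts of the audio currently being played,
                                    // updated from the audio output callback

void video_render_loop()
{
    const int64_t DROP_THRESHOLD_MS = 100;   // frame is hopelessly late
    for (;;)
    {
        if (videoQueue.empty()) {
            std::this_thread::sleep_for(std::chrono::milliseconds(5));
            continue;
        }
        VideoFrame f = videoQueue.front();
        const int64_t diff = f.pts_ms - audioClockMs;

        if (diff > 5) {
            // Frame is early: wait until the audio clock catches up.
            std::this_thread::sleep_for(std::chrono::milliseconds(diff));
        } else if (diff < -DROP_THRESHOLD_MS) {
            // Frame is far too late: drop it and look at the next one.
            videoQueue.pop_front();
            continue;
        }
        // display(f) goes here.
        videoQueue.pop_front();
    }
}
```

Pre-buffering a few hundred milliseconds of both audio and video before starting playback keeps the audio from running ahead of the frames that are actually available, which is the failure mode I described above.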
Thanks a lot!
I needed something like this: http://www.freepatentsonline.com/7680153.html
It's pretty difficult to understand, but I think this patent explains the basics of syncing.