I'm using MultipeerConnectivity for live video streaming between connected phones. I have the streaming working, but it is very slow and the video quality is very low, so I decided to add H.264 encoding and decoding. How can I implement this over a peer-to-peer connection?
My question is: "Is it possible to encode and decode the video and stream it through MultipeerConnectivity to the devices we are connected to?"
I have used these links for H.264 and MultipeerConnectivity.
H.264, which is in Objective-C; I converted the code to Swift:
https://mobisoftinfotech.com/resources/mguide/h264-encode-decode-using-videotoolbox/
MultipeerConnectivity, which is in Objective-C; I converted the code to Swift:
https://github.com/pj4533/AVCaptureMultipeerVideoDataOutput
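Yes, it's possible: MCSession just transports Data, so once VideoToolbox hands you compressed H.264 frames you can send those bytes instead of raw images, which is exactly what cuts the bandwidth. Below is a minimal sketch of the sending side, assuming you already have a connected MCSession and a VTCompressionSession whose output callback gives you one encoded frame as Annex-B Data (VideoSender and send(encodedFrame:) are illustrative names, not from the linked code):

    import MultipeerConnectivity

    // Illustrative sender: the names here are placeholders, not from the linked code.
    final class VideoSender {
        let session: MCSession

        init(session: MCSession) {
            self.session = session
        }

        // Call this from the VTCompressionSession output callback with the
        // Annex-B elementary-stream bytes of one encoded frame.
        func send(encodedFrame: Data) {
            guard !session.connectedPeers.isEmpty else { return }
            do {
                // .unreliable avoids head-of-line blocking: a late video frame
                // is useless, so dropping beats stalling the whole stream.
                try session.send(encodedFrame,
                                 toPeers: session.connectedPeers,
                                 with: .unreliable)
            } catch {
                print("send failed: \(error)")
            }
        }
    }

On the receiving side, feed the bytes into a VTDecompressionSession; note that the receiver also needs the SPS/PPS parameter sets (resend them with every keyframe) to build its CMVideoFormatDescription.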
Related
How do I convert the RTP stream (produced by jetson.utils.videoOutput("rtp://#:1234")) to RTSP so that it can be streamed and viewed over the network?
I tried FFserver as well as Streamer, but I don't have much expertise with either.
The dev branch of jetson-inference/jetson-utils has native support for RTSP output. See:
https://forums.developer.nvidia.com/t/convert-rtp-to-rtsp/239731/3
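For example, once you're on the dev branch, the RTSP server output described in that post takes a URI of the form rtsp://@:<port>/<path>, usable both from the bundled tools and from jetson.utils.videoOutput(); the port and stream name below are illustrative, so check the linked post for the exact syntax:

    video-viewer csi://0 rtsp://@:8554/my_output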
Summary: How do I stream high-quality video using native WebRTC?
I have an H.264 stream that's 1920x1080 at about 30 fps. I can currently stream this from a server on localhost to a native client on localhost just fine.
I wrote a WebRTC server using Google's WebRTC native library. I've written a VideoEncoder and VideoEncoderFactory that takes frames consisting of already-encoded data and broadcasts them over a video track. Using this I can send my H.264 stream to the WebRTC server over a pipe, and I can see the video stream in a browser.
However, any time something moves, the video gets corrupted. It continues to play but is full of artifacts. Eventually I discovered that WebRTC is dropping some of my frames. When I attach a sequentially increasing ID to each frame before I pass it to rtc::AdaptedVideoTrackSource::OnFrame, and log the same ID in webrtc::VideoEncoder::Encode, I can see that some of my frames simply disappear.
This kind of makes sense: I'm trying to stream high-quality video over something meant for video chat, and lowering my framerate fixes the corruption. However, I'm not asking the WebRTC library to do a lot; it's just forwarding already-encoded data to a client on localhost. I have a native app that does this fine, and I've seen one browser WebRTC client that can do this. Is there a field in the SDP, or some configuration change, that will allow me to stream my video?
The solution was "How to control bandwidth in WebRTC video call?".
I had heard about changing the offer SDP but dismissed it, because I was told that the browser will accept unlimited bandwidth by default and that you'd only need to do this if you want to limit bandwidth. However, adding "b=AS:" with a high number has fixed all of my problems.
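For reference, this is a sketch of where the line lands in the munged offer SDP. Per RFC 4566, b=AS gives the application-specific bandwidth limit in kilobits per second; 10000 and the payload type are just illustrative values:

    m=video 9 UDP/TLS/RTP/SAVPF 96
    c=IN IP4 0.0.0.0
    b=AS:10000
    a=rtpmap:96 H264/90000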
I want to be able to create an application that can read and publish an RTMP stream.
Using OpenCV I could read RTP thanks to its FFmpeg backend.
Stream video from ffmpeg and capture with OpenCV
C++ RTMP Server is another possibility, but it is an RTMP server, so it mainly takes requests and sends files. Although it is open source, I am unsure how to build or integrate it into a Visual Studio application in such a way as to make its function calls available to my project.
Other sources indicate that OpenCV's RTSP support isn't great.
http://workingwithcomputervision.blogspot.co.nz/2012/06/issues-with-opencv-and-rtsp.html
How can you run a streaming server, such as C++ RTMP Server, and get the raw data out? OpenCV can encode and decode image data for streaming, but how can you link the two?
Could a C++ application pipe a stream together? How could I interface with that stream to send it more images, and also to receive images?
Regards,
crtmpserver and librtmp work well.
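Another way to link the two without writing against librtmp directly is to let ffmpeg speak RTMP and pipe raw BGR frames to and from your OpenCV process over stdin/stdout. A sketch, where your_app, the server URL, and the frame geometry are all placeholders:

    ffmpeg -i rtmp://server/app/stream -f rawvideo -pix_fmt bgr24 pipe:1 | your_app

    your_app | ffmpeg -f rawvideo -pix_fmt bgr24 -s 1280x720 -r 30 -i pipe:0 \
        -c:v libx264 -f flv rtmp://server/app/stream

bgr24 matches the byte layout of an 8-bit, 3-channel OpenCV Mat, so each frame crosses the pipe as exactly width*height*3 bytes.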
I would like to develop a very small RTSP client to get the video stream from network cameras. Does anybody know where I can find a simple explanation of the protocol and some good examples?
Best regards,
You connect to the camera via the RTSP protocol to query its capabilities, identify streams, and prepare/start transmission.
RFC 2326 - Real Time Streaming Protocol (RTSP)
As a part of initialization and handshaking, you will discover available streams.
RFC 4566 - SDP: Session Description Protocol
Then you will set up RTP session(s) to receive data, over UDP or sharing the same TCP connection.
RFC 3550 - RTP: A Transport Protocol for Real-Time Applications
RFC 4571 - Framing Real-time Transport Protocol (RTP) and RTP Control Protocol (RTCP) Packets over Connection-Oriented Transport
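Putting the handshake together, a minimal client session looks roughly like this (the URL, track name, port numbers, and session ID are all illustrative):

    C: DESCRIBE rtsp://camera/stream RTSP/1.0
    C: CSeq: 1
    C: Accept: application/sdp
    S: RTSP/1.0 200 OK   (+ SDP body describing the available tracks)

    C: SETUP rtsp://camera/stream/trackID=1 RTSP/1.0
    C: CSeq: 2
    C: Transport: RTP/AVP;unicast;client_port=5000-5001
    S: RTSP/1.0 200 OK   (Session: 12345678)

    C: PLAY rtsp://camera/stream RTSP/1.0
    C: CSeq: 3
    C: Session: 12345678
    S: RTSP/1.0 200 OK   (RTP now flows to port 5000, RTCP on 5001)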
To decode the media streams, you will convert the RTP payload into the raw data you need for further processing. With IP cameras, your primary interest is most likely MPEG-4 AVC (H.264):
RFC 3984 - RTP Payload Format for H.264 Video (obsoleted by RFC 6184)
RFC 6184 - RTP Payload Format for H.264 Video
This should give you some (introductory) reading to start from.
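As a sketch of what that payload conversion involves (per RFC 6184): single-NAL-unit packets pass through as-is, fragmentation units (FU-A, NAL type 28) have to be reassembled, and an Annex-B start code is prepended before the result goes to the decoder. Illustrative Swift, ignoring STAP-A aggregation and packet loss:

    import Foundation

    let startCode = Data([0, 0, 0, 1])
    var fragmentBuffer = Data()   // accumulates one fragmented (FU-A) NAL unit

    // Convert one RTP payload into an Annex-B NAL unit, or nil if the
    // unit is not complete yet (or the packet type is not handled here).
    func depacketize(_ payload: Data) -> Data? {
        guard payload.count >= 2 else { return nil }
        let first = payload[payload.startIndex]
        switch first & 0x1F {
        case 1...23:
            // Single NAL unit packet: prepend a start code and pass through.
            return startCode + payload
        case 28:
            // FU-A: byte 0 is the FU indicator, byte 1 the FU header.
            let fuHeader = payload[payload.startIndex + 1]
            if fuHeader & 0x80 != 0 {                  // S bit: first fragment
                // Rebuild the original NAL header from indicator + header bits.
                let nalHeader = (first & 0xE0) | (fuHeader & 0x1F)
                fragmentBuffer = startCode + Data([nalHeader])
            }
            fragmentBuffer.append(payload.dropFirst(2))
            if fuHeader & 0x40 != 0 {                  // E bit: unit complete
                defer { fragmentBuffer = Data() }
                return fragmentBuffer
            }
            return nil
        default:
            return nil   // STAP-A and other aggregation types omitted
        }
    }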
Try the GStreamer library. It is a modular, very flexible library which can be used for streaming (both client and server). Just check the docs and pick the right plugins.
GStreamer can be used in two ways, depending on your requirements: as a command-line tool or as a library in your project.
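For example, a command-line pipeline that pulls H.264 from a camera and displays it (the camera URL is illustrative); the same elements can be wired up programmatically through the library API:

    gst-launch-1.0 rtspsrc location=rtsp://192.168.1.10:554/stream ! \
        rtph264depay ! h264parse ! avdec_h264 ! videoconvert ! autovideosink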
I'm working on an application where I want to stream H.264 over MPEG-TS to an RTMP server (FMS, C++ RTMP Server, Wowza). I'm looking at the output-example.c of libav. I stripped all the audio for now to keep it simple.
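For comparison, the command-line equivalent of what I'm after is essentially a remux, since RTMP carries FLV rather than MPEG-TS (the URL is illustrative; -an drops the audio to match my stripped-down test):

    ffmpeg -re -i input.ts -c:v copy -an -f flv rtmp://localhost/live/stream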
I'm using this code as a test (not working):
https://gist.github.com/fb450aee77471a1d86f3#comments
What am I doing wrong there?
Thanks