I'm writing an application on an embedded device which receives an RTP stream carrying G.729, PCM, or H.264. The packets arrive at my application as a char* pointing to the RTP packet. I would like to be able to see or listen to the stream (as a test), but I don't have a player on this device. I thought I might forward the stream to a socket and play it somewhere else, e.g. on a Linux machine running a player. Would this be possible? I don't have RTSP, only RTP. Is VLC, for instance, a possible way to do this? Can I simply send the RTP packets to the socket and play them on the other side?
Thanks!
Example of an SDP that contains an H.264 stream:
Server: rtsp server
Content-type: application/sdp
Content-base: rtsp://[some URL]
Content-length: 505
v=0
o=rtsp 1295996924 1590699491 IN IP4 0.0.0.0
s=RTSP Session
i=rtsp server
c=IN IP4 192.168.1.2
t=0 0
a=control:*
m=audio 0 RTP/AVP 97
a=rtpmap:97 mpeg4-generic/8000/1
a=fmtp:97 streamtype=5; profile-level-id=15; objectType=2; mode=AAC-hbr;
a=range:npt=now-
a=control:trackID=0
m=video 0 RTP/AVP 96
a=rtpmap:96 H264/90000
a=fmtp:96 profile-level-id=42E015; sprop-parameter-sets=Z0LgFdoHgtE=,aM4wpIA=; packetization-mode=1
a=range:npt=now-
a=framesize:96 480-352
a=control:trackID=1
No, you cannot send them as-is. Plain RTP doesn't contain any info about the stream format; it only carries info about the packet itself: sequence number, timestamp, and additional synchronization info. The simplest self-describing way to stream over RTP is RTP/MPEG-TS (MPEG Transport Stream).
Unfortunately, I don't know of a ready-to-use solution. VLC can stream (and play) such streams over UDP from a file, because it takes the required info from the file's container format. A workable solution could combine an external stream description in SDP format with your actual RTP packets.
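For instance, a minimal receiver-side SDP of this kind (the address, port, and payload type below are assumptions; adjust them to your stream) that VLC or ffplay could open while the raw H.264 RTP packets are forwarded to that UDP port might look like:

```
v=0
o=- 0 0 IN IP4 127.0.0.1
s=Raw RTP playback test
c=IN IP4 127.0.0.1
t=0 0
m=video 5004 RTP/AVP 96
a=rtpmap:96 H264/90000
```

For H.264 the player may also need an a=fmtp line with sprop-parameter-sets, like the one in the SDP example above.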
[EDIT] BTW, it's weird that you receive just an RTP stream without any description of its format; usually a description is provided somehow, e.g. by RTSP, MPEG-TS, or something else.
You can forward RTP packets over a UDP socket.
I'm trying to play out an incoming RTP audio stream using ffplay (or, alternatively, by using my own code which uses libav). The incoming stream is muxing RTP and RTCP packets. The playout works, but two local UDP ports are used:
The port I'm requesting
The port I'm requesting + 1 (which I guess is the RTCP port)
This is the ffplay command:
ffplay -loglevel verbose -protocol_whitelist file,udp,rtp test.sdp
And the content of the SDP file:
v=0
o=- 0 0 IN IP4 192.168.51.51
s=RTP-INPUT-1
c=IN IP4 192.168.51.61
t=0 0
m=audio 8006 RTP/AVP 97
b=AS:96
a=rtpmap:97 opus/48000/1
a=rtcp-mux
Note the line a=rtcp-mux. Even with this line present, two local UDP ports are used; I would expect only one port.
I'm looking for a way to use only one UDP port.
Here's the relevant libav C++ code (I've left out error handling etc.):
auto formatContext = avformat_alloc_context();
const AVInputFormat* format = av_find_input_format("sdp");
AVDictionary *formatOpts = nullptr;
av_dict_set(&formatOpts, "protocol_whitelist", "file,udp,rtp", 0);
int result = avformat_open_input(&formatContext, sdpFilepath, format, &formatOpts);
result = avformat_find_stream_info(formatContext, nullptr);
By convention RTP uses two ports: the RTP flow is on an even-numbered port, and the RTCP control flow is on the next (odd-numbered) port.
Edit: https://www.rfc-editor.org/rfc/rfc8035
RFC8035 clarifies how to multiplex RTP and RTCP on a single IP address and port, referred to as RTP/RTCP multiplexing.
You are on the right track. After some digging in all those RFCs, I suppose the best course of action is to check whether ffmpeg's RTP stack implements RFC 5761, esp. section 4:
Distinguishable RTP and RTCP Packets
When RTP and RTCP packets are multiplexed onto a single port, the
RTCP packet type field occupies the same position in the packet as
the combination of the RTP marker (M) bit and the RTP payload type
(PT). This field can be used to distinguish RTP and RTCP packets
when two restrictions are observed: 1) the RTP payload type values
used are distinct from the RTCP packet types used; and 2) for each
RTP payload type (PT), PT+128 is distinct from the RTCP packet types
used. The first constraint precludes a direct conflict between RTP
payload type and RTCP packet type; the second constraint precludes a
conflict between an RTP data packet with the marker bit set and an
RTCP packet.
I am connecting to the camera using the live555 testRTSPClient application (http://www.live555.com/liveMedia/#testProgs)
./testRTSPClient rtsp://....
but it does not get any video stream. The problem seems to be that the camera's SDP says it has two connection lines
...
s=/videoinput_1:0/h264_1/media.stm
c=IN IP4 0.0.0.0
m=video 11800 RTP/AVP 96
c=IN IP4 239.0.3.180/1
...
and testRTSPClient selects the last one, the multicast one. In the SETUP command, testRTSPClient sends the following
...
User-Agent: ./testRTSPClient (LIVE555 Streaming Media v2017.06.04)
Transport: RTP/AVP;multicast;port=11800-11801
...
When connecting to another camera, whose SDP contains only one connection line (c=IN IP4 0.0.0.0), everything is fine.
1) So the first question is: is it possible to force testRTSPClient to select UDP unicast instead? ffplay streams from the camera nicely, and Wireshark shows that ffplay sets up the transport with UDP unicast, not multicast.
2) Secondly, I am using my own C++ example similar to testRTSPClient. In the subsession setup I use the RTSPClient::sendSetupCommand function
rtspClient->sendSetupCommand(*subsession, continueAfterSetup, False, False, False);
but my program still does the SETUP with multicast, like testRTSPClient. The forceMulticastOnUnspecified parameter does not seem to make any difference here. Currently, the only option I see is to remove the second, multicast connection line from the SDP:
void continueAfterDescribe(RTSPClient* rtspClient, int resultCode, char* resultString)
{
...
char* const sdpDescription = resultString;
env << "Got a SDP description:\n" << sdpDescription << "\n";
// Hypothetical new code
if (user determined transport == UDP unicast)
removeSecondConnection(sdpDescription);
// Create a media session object from this SDP description:
MediaSession::createNew(env, sdpDescription);
...
But that feels like a hack to me. Is there any other way to select unicast using the live555 C++ API? I know there is a TCP unicast option, but I am not interested in that for now.
Hello Stack Overflow experts,
I am using DPDK on a Mellanox NIC, but I am struggling to get packet fragmentation working in a DPDK application.
sungho#c3n24:~$ lspci | grep Mellanox
81:00.0 Ethernet controller: Mellanox Technologies MT27500 Family
[ConnectX-3]
The DPDK applications (l3fwd, ip_fragmentation, ip_reassembly) did not recognize the received packets as having an IPv4 header.
At first I crafted my own packets to send with IPv4 headers, so I assumed I was crafting them incorrectly.
So I used DPDK-pktgen instead, but the DPDK applications (l3fwd, ip_fragmentation, ip_reassembly) still did not recognize the IPv4 header.
As a last resort, I tested dpdk-testpmd and found the following in the status info.
********************* Infos for port 1 *********************
MAC address: E4:1D:2D:D9:CB:81
Driver name: net_mlx4
Connect to socket: 1
memory allocation on the socket: 1
Link status: up
Link speed: 10000 Mbps
Link duplex: full-duplex
MTU: 1500
Promiscuous mode: enabled
Allmulticast mode: disabled
Maximum number of MAC addresses: 127
Maximum number of MAC addresses of hash filtering: 0
VLAN offload:
strip on
filter on
qinq(extend) off
No flow type is supported.
Max possible RX queues: 65408
Max possible number of RXDs per queue: 65535
Min possible number of RXDs per queue: 0
RXDs number alignment: 1
Max possible TX queues: 65408
Max possible number of TXDs per queue: 65535
Min possible number of TXDs per queue: 0
TXDs number alignment: 1
testpmd> show port info 1
According to the DPDK documentation, the port info for port 1 should list the supported flow types, but mine shows "No flow type is supported."
The list below is an example of what should be displayed under flow types:
Supported flow types:
ipv4-frag
ipv4-tcp
ipv4-udp
ipv4-sctp
ipv4-other
ipv6-frag
ipv6-tcp
ipv6-udp
ipv6-sctp
ipv6-other
l2_payload
port
vxlan
geneve
nvgre
So does my NIC, the Mellanox ConnectX-3, not support DPDK IP fragmentation? Or is there additional configuration that needs to be done before trying out packet fragmentation?
-- [EDIT]
So I have compared the packets sent from DPDK-pktgen with the packets received by the DPDK application. The packets I receive are exactly the ones I sent from the application (I get the correct data).
The problem begins at this code:
struct rte_mbuf *pkt;
RTE_ETH_IS_IPV4_HDR(pkt->packet_type)
This determines whether the packet is IPv4 or not. The value of pkt->packet_type is zero both for DPDK-pktgen and for the DPDK application, and if pkt->packet_type is zero the DPDK application treats the packet as NOT IPv4. So this basic type check fails from the start.
So I believe that either the DPDK sample is wrong, or the NIC cannot report IPv4 for some reason.
The data I receive has a pattern: at the beginning I get the correct message, but after that the sequence of packets has different data between the MAC address and the data offset.
So my assumption is that the two sides are interpreting the data differently and getting the wrong result.
I am pretty sure any NIC, including the Mellanox ConnectX-3, must support IP fragments.
The flow types you are referring to are for the Flow Director, i.e. for mapping specific flows to specific RX queues. Even if your NIC does not support the Flow Director, that does not matter for IP fragmentation.
I guess there is an error in the setup or in the app. You wrote:
the dpdk application did not recognize the received packet as an IPv4 header.
I would look into this more closely. Try to dump those packets with dpdk-pdump, or even simply dump the received packets on the console with rte_pktmbuf_dump().
If you still suspect the NIC, the best option would be to temporarily substitute it with another brand, or with a virtual device, just to confirm that it really is the NIC.
EDIT:
Have a look at mlx4_ptype_table: for fragmented IPv4 packets it should return a packet_type set to RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_L4_FRAG.
Please note the functionality was added in DPDK 17.11.
I suggest you dump pkt->packet_type to the console to make sure it really is zero. Also make sure you have the latest libmlx4 installed.
I have code for a streamer and a receiver written in C++. I am trying to send packets that contain pictures streamed from an FLV video in the streamer, but in the receiver I get these errors. I have an SDP file in the receiver that contains the following data:
v=0
o=- 0 0 IN IP4 127.0.0.1
s=No name
t= 1 1000000
a=tool:libavformat 55.19.104
m=video 1234 RTP/AVP 117
c=IN IP4 127.0.0.1
b=AS:394
a=rtpmap:117 H264/90000
Does anyone know what is the cause of the problem and the best way to fix it?
I don't know if this is your only problem, but the payload format for H.264 is "H264/90000"... the receiver may be parsing that as "90" instead. A messed-up clock rate could explain the missing and late packets you're seeing.
Problem
- I am working on a streaming server and created a non-blocking socket using:
flag=fcntl(m_fd,F_GETFL);
flag|=O_NONBLOCK;
fcntl(m_fd,F_SETFL,flag);
The server then sends the media file contents using this code:
bool SendData(const char *pData,long nSize)
{
int fd=m_pSock->get_fd();
fd_set write_flag;
while(1)
{
FD_ZERO(&write_flag);
FD_SET(fd,&write_flag);
struct timeval tout;
tout.tv_sec=0;
tout.tv_usec=500000;
int res=select(fd+1,0,&write_flag,0,&tout);
if(-1==res)
{
print("select() failure\n");
return false;
}
if(1==res)
{
unsigned long sndLen=0;
if(!m_pSock->send(pData,nSize,&sndLen))
{
print("socket send() failure\n");
return false;
}
pData+=sndLen; // advance past the bytes already sent (partial sends are possible)
nSize-=sndLen;
if(!nSize)
return true; //everything is sent
}
}
}
Using the above code I am streaming a ~200-second audio file. I expect the server to stream it in 2-3 seconds using the full available network bandwidth (throttling off), but the problem is that the server takes 199-200 seconds to stream the full contents.
While debugging, I commented out the m_pSock->send() call and tried dumping the file locally instead. Dumping the file takes only 1-2 seconds.
Questions
- If I am using a non-blocking TCP socket, why does send() take so much time?
Since the data is always available, select() returns immediately (as we saw while dumping the file). Does that mean send() is affected by recv() on the client side?
Any input on this would be helpful. Client behavior is not in our scope.
Your client is probably doing some buffering to avoid network jitter, but it is likely still playing the audio file in real time. So the file transfer rate is matched to the rate at which the client consumes the data. Since it is a 200-second audio file, the transfer takes about 200 seconds to complete.
Because the TCP output and input buffers are probably much smaller than the audio file, the reading speed of the receiving application can slow down the sending speed.
When the sender's TCP output buffer and the receiver's TCP input buffer are both full, the sender's TCP stack cannot accept any more data from the sending application, so sending blocks until there is space again.
If the receiver reads the TCP stream at the same speed the data is needed for playback, the transfer takes about 200 seconds, or a little less.
This can be avoided by using application-layer buffering on the receiving end.
The problem could be that the client side is using blocking TCP and processing all the data on a single thread, with no buffer/queue etc., right through to the "player" of the file. In that case, your side being non-blocking only speeds things up until the TCP/IP protocol stack buffers, NIC buffers, etc. are full; after that you can still only send data as fast as the client side consumes it. Remember, TCP is a reliable point-to-point protocol.
Where does your client code come from in your testing? Is it some sort of simple test client someone has written?