I have an IP camera that can serve media data over RTSP.
I am developing an application to receive that media data, using C++ and Qt3.
I create a socket, connect it to my device's IP on port 554, and send the first request:
SETUP rtsp://192.168.4.160/ufirststream RTSP/1.0\r\n
CSeq: 1\r\n
Transport: RTP/AVP; client_port=554\r\n\r\n
I get this answer:
RTSP/1.0 200 OK
CSeq: 1
Date: Sat, Mar 24 2012 17:24:59 GMT
Transport: RTP/AVP;unicast;destination=192.168.4.186;source=192.168.4.160;client_port=0-1;server_port=2000-2001
Session: 413F4DDB
I parse it to get the session value and send the next request:
PLAY rtsp://192.168.4.160/ufirststream RTSP/1.0
CSeq: 1
Session: 413F4DDB
And server says:
RTSP/1.0 200 OK
CSeq: 1
Date: Sat, Mar 24 2012 17:25:02 GMT
Session: 413F4DDB
RTP-Info: url=rtsp://192.168.4.160/ufirststream/track1;seq=6716;rtptime=406936711
How can I get the media data? I thought the PLAY method would make the server send me a stream, but it only returns an RTSP URL and other info...
I need the binary stream from the camera. Can you advise me on my next step?
The Transport header of the SETUP request indicates which protocol will be used to send the stream, and client_port indicates the ports on which your client will be listening.
Try opening two consecutive UDP ports and pass that range as client_port=port1-port2 instead of 554. These two ports will be used for the RTP and RTCP streams (media and control data, respectively).
In addition, the RTP port number should be an even number, and the RTCP port the next odd number (see that question if you want the port range to be chosen at random rather than by the user).
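A minimal sketch of that setup, using plain BSD sockets rather than Qt (the port numbers are assumptions, not values from the camera):

#include <cstdio>
#include <cstring>
#include <sys/socket.h>
#include <netinet/in.h>
#include <unistd.h>

// Bind one UDP port; returns the socket descriptor, or -1 on failure.
int bind_udp(int port)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    sockaddr_in addr;
    std::memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = INADDR_ANY;
    addr.sin_port = htons(port);
    if (bind(fd, (sockaddr *)&addr, sizeof(addr)) < 0) {
        close(fd);
        return -1;
    }
    return fd;
}

int main()
{
    const int rtp_port  = 5000;          // even port: RTP (media packets)
    const int rtcp_port = rtp_port + 1;  // next odd port: RTCP (control packets)
    int rtp_fd  = bind_udp(rtp_port);
    int rtcp_fd = bind_udp(rtcp_port);
    if (rtp_fd < 0 || rtcp_fd < 0)
        return 1;

    // The SETUP request then advertises this range instead of 554:
    char request[256];
    std::snprintf(request, sizeof(request),
                  "SETUP rtsp://192.168.4.160/ufirststream RTSP/1.0\r\n"
                  "CSeq: 1\r\n"
                  "Transport: RTP/AVP;unicast;client_port=%d-%d\r\n\r\n",
                  rtp_port, rtcp_port);
    // ...send `request` over the existing TCP connection to port 554,
    // then PLAY as before; the binary RTP packets arrive on rtp_fd.
    return 0;
}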
I am trying to build a custom live streaming service as documented here:
https://aws.amazon.com/solutions/implementations/live-streaming-on-aws/
I used the provided CloudFormation template for "Live Streaming on AWS with MediaStore", which provisioned all the relevant resources for me. Next, I wanted to test my custom streamer.
I used OBS Studio to stream my webcam output to the MediaLivePushEndpoint that was created during CloudFormation provisioning. OBS suggests that it is already streaming the webcam to the AWS MediaLive RTMP endpoint.
Now, to confirm that I can watch the stream, I set the Input Network Stream in VLC player to the CloudFront endpoint that was created for me (which looks like this: https://aksj2arbacadabra.cloudfront.net/stream/index.m3u8), but VLC is unable to fetch the stream and fails with the following error message in the logs. What am I missing? Thanks!
...
...
...
http debug: outgoing request: GET /stream/index.m3u8 HTTP/1.1 Host: d2lasasasauyhk.cloudfront.net Accept: */* Accept-Language: en_US User-Agent: VLC/3.0.11 LibVLC/3.0.11 Range: bytes=0-
http debug: incoming response: HTTP/1.1 404 Not Found Content-Type: application/x-amz-json-1.1 Content-Length: 31 Connection: keep-alive x-amzn-RequestId: HRNVKYNLTdsadasdasasasasaPXAKWD7AQ55HLYBBXHPH6GIBH5WWY x-amzn-ErrorType: ObjectNotFoundException Date: Wed, 18 Nov 2020 04:08:53 GMT X-Cache: Error from cloudfront Via: 1.1 5085d90866d21sadasdasdad53213.cloudfront.net (CloudFront) X-Amz-Cf-Pop: EWR52-C4 X-Amz-Cf-Id: btASELasdasdtzaLkdbIu0hJ_asdasdasdbgiZ5hNn1-utWQ==
access error: HTTP 404 error
main debug: no access modules matched
main debug: dead input
qt debug: IM: Deleting the input
main debug: changing item without a request (current 2/3)
main debug: nothing to play
Updates based on Zach's response:
Here are the parameters I used while deploying the CloudFormation template for live streaming using MediaLive (notice that I am using RTMP_PUSH):
I am using MediaLive and not MediaPackage, so when I go to my channel in MediaLive, I see this:
Notice that it says it cannot find the "stream [stream]", but I confirmed that the RTMP endpoint I added to OBS is exactly the one that was created as an output for me by my CloudFormation stack:
Finally, when I go to MediaStore to see if there are any objects, it is completely empty:
Vader,
Thank you for the clarification here; I can see the issue is with your settings in OBS. When you set up your input for MediaLive, you created a unique Application Name and Instance, which are part of the URI. The Application Name is LiveStreamingwithMediaStore and the Instance is stream. In OBS, you will want to remove stream from the end of the Server URI and place it in the Stream Key portion, where you currently have a 1.
OBS Settings:
Server: rtmp://server_ip:1935/Application_Name/
Stream Key: Instance_Name
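With the values from this stack, that would look like the following (the server host is a placeholder for the input endpoint in your stack outputs):
Server: rtmp://<MediaLive input IP>:1935/LiveStreamingwithMediaStore/
Stream Key: stream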
Since you posted the screenshot here on an open forum (which really helped determine the issue), it exposes settings that would allow someone else to send to the RTMP input, so I would suggest that you change the Application Name and Instance.
Zach
We are using OpenSSL 1.0.2k for our TLS-related functionality.
In one of our deployments, the client was able to complete the TLS handshake using TLSv1.2 and send application data to the server. After some requests, the TLS connection was closed from the server side with the error below:
"error:1408F10B:SSL routines:SSL3_GET_RECORD:wrong version number"
TLS handshake steps:
1. Client hello
2. Server Hello
3. Certificate, Certificate Request, Server Hello Done
4. Certificate, Client Key Exchange, Change Cipher Spec, Encrypted Handshake Message
5. Change Cipher Spec, Encrypted Handshake Message
6. Application data exchanged between client and server
7. Encrypted Alert (server to client)
8. Encrypted Alert (client to server)
The error logs on the server side say "error:1408F10B:SSL routines:SSL3_GET_RECORD:wrong version number".
Can you please let us know the cause of this issue? If the SSL versions were mismatched, then the handshake phase should not succeed, right?
But in our case the handshake is successful, and after some application data transfer our server fails with this error.
If the SSL versions were mismatched, then the handshake phase should not succeed, right?
No. Every TLS record has a header, and the header carries a TLS version:
(
    byte    - record_type
    byte[2] - version
    byte[2] - length
) header
byte[length] - encrypted or raw data
The header is always sent in the clear; it is never encrypted. Even if the client sent the TLS 1.2 version in every record during the handshake, it can send another version after the handshake is finished, or someone in between can modify the network traffic. In those cases OpenSSL throws the described error.
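As a minimal sketch of the check involved, the 5-byte record header can be decoded like this in C++ (the example bytes are taken from the server-side dump shown later in this thread):

#include <cstdint>
#include <cstdio>

int main()
{
    // Handshake record (0x16), wire version 0x0301, payload length 0x01e2.
    const std::uint8_t hdr[5] = {0x16, 0x03, 0x01, 0x01, 0xe2};
    std::uint8_t  record_type = hdr[0];
    std::uint16_t version = (hdr[1] << 8) | hdr[2];
    std::uint16_t length  = (hdr[3] << 8) | hdr[4];
    // OpenSSL raises "wrong version number" when this version field does
    // not match the version negotiated during the handshake.
    std::printf("type=0x%02x version=0x%04x length=%u\n",
                record_type, version, length);
    return 0;
}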
In my case, I was using OpenSSL for client functionality.
I was calling SSL_set_connect_state after SSL_connect. It should be called before.
SSL_set_connect_state (for client only) cleans up all the state!
snippet:
void SSL_set_connect_state(SSL *s)
{
    s->server = 0;
    s->shutdown = 0;
    ossl_statem_clear(s);
    s->handshake_func = s->method->ssl_connect;
    clear_ciphers(s);
}
In my case:
1) Client <-> server handshake succeeded.
2) SSL_write from the client side (client sending a message to the server) led to the exact same error as mentioned in the question (on the server side).
I looked at a packet dump on the server side.
read from 0x2651570 [0x2656c63] (5 bytes => 5 (0x5)) .
0000 - 16 03 01 01 e2 .....
ERROR
139688140752544:error:1408F10B:SSL routines:SSL3_GET_RECORD:wrong version number:s3_pkt.c:337:
1) The 5 bytes read in the snippet above are the SSL record header. The server received data and attempted to read an SSL record.
2) The first byte of the record is the SSL record type; in this case 0x16 => 22.
This itself is wrong: as far as the server was concerned, the handshake was successful and it was expecting application data. Instead it received an SSL record for a handshake, hence it threw the error.
A correct snippet of application data starts with 0x17 => 23:
read from 0x2664f80 [0x2656c63] (5 bytes => 5 (0x5)) .
0000 - 17 03 03 00 1c
Since SSL_set_connect_state was called after connecting, the client's state was lost, and SSL_write attempts a handshake if one wasn't performed before (which the client believed, since its state was lost).
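A minimal sketch of the corrected call order (socket creation, context setup, and error handling are elided; ctx and fd are assumed to come from elsewhere):

#include <openssl/ssl.h>

SSL *make_client(SSL_CTX *ctx, int fd)
{
    SSL *ssl = SSL_new(ctx);
    SSL_set_fd(ssl, fd);
    SSL_set_connect_state(ssl);   // mark as client BEFORE the handshake
    if (SSL_connect(ssl) != 1) {  // perform the handshake
        SSL_free(ssl);
        return nullptr;
    }
    // SSL_write is now safe; calling SSL_set_connect_state at this point
    // would clear the state and reproduce the "wrong version number" error.
    return ssl;
}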
More data on these SSL records can be found here:
https://www.ibm.com/support/knowledgecenter/SSB23S_1.1.0.12/gtps7/s5rcd.html
I'm trying to get an ALB/Node.js/socket.io solution working in its simplest form and I'm running into an issue where the handshake disconnects. At the moment, I am intentionally using only one node in the TargetGroup to eliminate variables related to node switching and session stickiness for now.
When connecting directly to the node via my NAT instance, it works fine. The disconnect only happens when going thru the ALB.
Here is what I have set up:
ALB with Listener HTTP 80 -> 8081 (no SSL)
2 AZs, both with routes to internet (as required for ALBs)
One socket.io EC2 node in one of the AZs
Path Pattern for /socket.io/* to socket.io target group (with my one node in it)
Default pattern is also socket.io target group
Stickiness Enabled (should not need for one node, but did it anyway)
Here is what I see in the socket.io node client:
Thu, 22 Dec 2016 20:59:26 GMT socket.io-client:manager opening ws://52.72.198.58
Thu, 22 Dec 2016 20:59:26 GMT engine.io-client:socket creating transport "websocket"
Thu, 22 Dec 2016 20:59:26 GMT engine.io-client:socket setting transport websocket
Thu, 22 Dec 2016 20:59:26 GMT socket.io-client:manager connect attempt will timeout after 20000
Thu, 22 Dec 2016 20:59:26 GMT engine.io-client:socket socket close with reason: "transport close"
And here is what I see on the socket.io node server:
Thu, 22 Dec 2016 20:59:26 GMT socket.io:socket joined room U_qmSv_7gvP_JOFsAAAL
Thu, 22 Dec 2016 20:59:26 GMT socket.io:client client close with reason transport close
Thu, 22 Dec 2016 20:59:26 GMT socket.io:socket closing socket - reason transport close
When I go thru my NAT to the same socket.io ec2 node, it all works with no transport closes.
So somehow the ALB is closing the connection immediately during a successful handshake.
Since it works via the NAT, I think the socket.io node and client are ok. And since I see the DEBUG entries in node, I know the ALB is able to reach the socket.io node ok. And since I only have one single socket.io node, there should be no issues with sessions and stickiness.
What could be contributing to the immediate disconnect when using ALB?
EDIT: I have also found that if the socket.io client making the request to the ELB is on an EC2 node, then it works. This implies something in the network path between the client and the ELB. I have yet to find a case where this works other than when the client is on EC2. It works everywhere via the NAT, just not via the ELB.
After lots of trial and error, I was able to determine this was due to a specific port range (80-83 in my case) that the ALB/ELB listens on. While the HTTP portion of the handshake works on those ports, the second phase, the TCP upgrade, gets disconnected.
There were no restrictions in the VPC related to this port range, so the issue lies in the network between my client and the ELB.
In conclusion, the issue is not anything in AWS or how I set up the resources; it lies somewhere outside AWS. If I find the exact cause I will post a comment back on this answer.
socket = io.connect("https://mywebsite/myroom",{'reconnect':true});
I increased the heartbeatTimeout and the closeTimeout on initialization:
socket.on('connect', function () {
    socket.socket.heartbeatTimeout = 500000;
    socket.socket.closeTimeout = 500000;

    socket.on('disconnect', function () {
        // Poll until the reconnect attempt succeeds, then reload the page.
        var socketConnectTimeInterval = setInterval(function () {
            socket.socket.reconnect();
            if (socket.socket.connected) {
                clearInterval(socketConnectTimeInterval);
                console.log('reconnected');
                location.reload();
            }
        }, 0);
    });
});
Also increase the Idle Timeout on the load balancer in AWS.
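For example, with the AWS CLI (the load balancer ARN is a placeholder, and 300 seconds is an arbitrary choice):

aws elbv2 modify-load-balancer-attributes \
    --load-balancer-arn arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/app/my-alb/abc123 \
    --attributes Key=idle_timeout.timeout_seconds,Value=300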
Hopefully that should prevent the timeout issue!
I have implemented a web service using gSOAP C++. The problem is that I am getting a random 500 internal error with the fault code "End of file or no input: Operation interrupted or timed out".
With this, I have verified the total time of the request; everything completes within a matter of milliseconds.
I also compared one successful response with the problematic one; all the XML values are identical.
Can anyone suggest where I might be going wrong?
The following is a chunk of debug logs from the SENT.log created by the gSOAP server:
<ResponseCode>00</ResponseCode><pDateTime>12055229</pDateTime><R1>null</R1><R2>null</R2><R3>null</R3><R4>null</R4>
HTTP/1.1 500 Internal Server Error
Server: gSOAP/2.8
Content-Type: text/xml; charset=utf-8
Content-Length: 456
Connection: close
SOAP-ENV:Client
End of file or no input: Operation interrupted or timed out (5 s recv delay)
HTTP/1.1 500 Internal Server Error
Server: gSOAP/2.8
Content-Type: text/xml; charset=utf-8
Content-Length: 456
Connection: close
Your timeout setting is perhaps too low (only 5 seconds), which means the connection times out if no data was received within 5 seconds. Set soap->recv_timeout = 30, which increases the data receive timeout to 30 seconds. What timeout settings are acceptable depends on your application, but 5 seconds is definitely tight.
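A minimal sketch of relaxing the timeouts on the server's soap context (30 seconds is an assumption; tune it for your traffic):

#include <stdsoap2.h>   // core gSOAP runtime

int main()
{
    struct soap soap;
    soap_init(&soap);
    // Positive timeout values are in seconds; negative values mean microseconds.
    soap.recv_timeout = 30;  // was effectively 5 s, per the log above
    soap.send_timeout = 30;
    // ... soap_bind/soap_accept/soap_serve as before ...
    soap_end(&soap);
    soap_done(&soap);
    return 0;
}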
Is it possible to do this? I know about custom requests, so I send a custom request with the text "DELE" and set the message ID that I want to delete. As a result, curl_easy_perform hangs until the timeout expires. On web forums people advise also sending a "QUIT" command after "DELE", but how can I send a "QUIT" command if libcurl hangs?
libcurl debug output follows:
* Connected to pop-mail.outlook.com (157.55.1.215) port 995 (#2)
* SSL connection using DES-CBC3-SHA
* Server certificate:
* subject: C=US; ST=Washington; L=Redmond; O=Microsoft Corporation; CN=*.hotmail.com
* start date: 2013-04-24 20:35:09 GMT
* expire date: 2016-04-24 20:35:09 GMT
* issuer: C=BE; O=GlobalSign nv-sa; CN=GlobalSign Organization Validation CA - G2
* SSL certificate verify result: unable to get local issuer certificate (20), continuing anyway.
< +OK DUB0-POP132 POP3 server ready
> CAPA
< -ERR unrecognized command
> USER ************#hotmail.com
< +OK password required
> PASS ******************
< +OK mailbox has 1 messages
> DELE 1
< +OK message deleted
* Operation too slow. Less than 1000 bytes/sec transferred the last 10 seconds
> QUIT
* Operation too slow. Less than 1000 bytes/sec transferred the last 10 seconds
* Closing connection 2
So the message is removed, but libcurl hangs until the speed limit forces it to disconnect, which is a bad idea. How do I force it to stop after deleting the message instead of waiting until the timeout comes?
If you look at the libcurl documentation, CURLOPT_CUSTOMREQUEST says:
POP3
When you tell libcurl to use a custom request it will behave like a LIST or RETR command was sent where it expects data to be returned by the server. As such CURLOPT_NOBODY should be used when specifying commands such as DELE and NOOP for example.
That is why libcurl is hanging: it is waiting for more data that the server is not actually sending. So set CURLOPT_NOBODY to stop that waiting.
There's a recently added example on the libcurl site showing exactly how to do this:
pop3-dele.c
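Along the lines of that example, a minimal sketch (credentials, host, and message number are placeholders):

#include <curl/curl.h>

int main(void)
{
    CURL *curl = curl_easy_init();
    if (!curl)
        return 1;
    curl_easy_setopt(curl, CURLOPT_USERNAME, "user@example.com");  // placeholder
    curl_easy_setopt(curl, CURLOPT_PASSWORD, "secret");            // placeholder
    curl_easy_setopt(curl, CURLOPT_URL, "pop3s://pop-mail.outlook.com:995/");
    curl_easy_setopt(curl, CURLOPT_CUSTOMREQUEST, "DELE 1");
    // Don't expect a RETR-style body, so perform() returns right after +OK.
    curl_easy_setopt(curl, CURLOPT_NOBODY, 1L);
    CURLcode res = curl_easy_perform(curl);
    curl_easy_cleanup(curl);  // sends QUIT and closes the connection
    return (res == CURLE_OK) ? 0 : 1;
}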