Network sync of looping GStreamer videos - C++

I am trying to frame-sync two looping videos over a LAN. Both videos have the same length, but their resolutions might differ. The following code already works for the first run:
server code
client code
As soon as the video reaches GST_MESSAGE_EOS it starts over, which is fine. The client, however, keeps reaching EOS over and over. I think this is because the server's clock is already past the client's video length.
How can I fix this? Can I somehow reset the server's base time on EOS, and if so, how?

After trying a lot of different approaches, I found that manually reconnecting to the server on each video end event did the trick. See the original server and client code links in the question for my solution.
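For reference, one way to attack the base-time question directly is: on EOS, stop the pipeline, take a fresh timestamp from the shared network clock, and install it as the new base time before restarting. This is an untested sketch assuming a pipeline already slaved to a GstNetClientClock; the function name and arguments are illustrative, not from the original code:

```cpp
#include <gst/gst.h>

// Hypothetical helper: called from the bus watch when GST_MESSAGE_EOS
// arrives. Re-seeds the pipeline's base time from the shared network
// clock so the restarted run is not behind the server's clock.
static void restart_on_eos(GstElement *pipeline, GstClock *net_clock) {
    // Drop to READY; this flushes the old running time.
    gst_element_set_state(pipeline, GST_STATE_READY);

    // Use "now" on the shared clock as the new base time.
    GstClockTime now = gst_clock_get_time(net_clock);
    gst_element_set_start_time(pipeline, GST_CLOCK_TIME_NONE);
    gst_element_set_base_time(pipeline, now);

    // Restart playback from the beginning.
    gst_element_set_state(pipeline, GST_STATE_PLAYING);
}
```

In practice the reconnect approach described above achieves much the same thing, since re-creating the net client clock re-seeds the base time anyway.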

Related

How to send the end-user IP address with the youtube-dl request and get the video download link, or another way to avoid being blocked by YouTube

I am currently working on a YouTube video downloader using youtube-dl and Django, hosted on PythonAnywhere. The project is almost complete, but when I send more than 15-20 requests, YouTube blocks my PythonAnywhere server IP. How can I solve this blocking problem? Free proxies take too much time to respond. Please give me a solution. Thanks in advance.
I suspect that most YouTube downloaders do one of three things:
- Execute client-side code to do the actual download; what the server/extension does is go through the page to find the file being served.
- Pay for professional proxy servers sufficient to handle the number of downloads one seeks to make without running into rate limits. Proxies are not expensive.
- Limit the rate at which downloads are conducted.
Those are the only ways I can see around the blocking problem. YouTube really doesn't want you to do what you are trying to do and has put a lot of thought into stopping it.
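Of the three, rate limiting is the only one that is purely code-side. A minimal sketch of a fixed-interval limiter (in C++ for illustration; the class and method names are made up, not from any particular downloader):

```cpp
#include <algorithm>
#include <chrono>
#include <thread>

// Minimal fixed-interval rate limiter: each acquire() blocks until at
// least `interval` has passed since the previous acquire() was granted.
class RateLimiter {
public:
    explicit RateLimiter(std::chrono::milliseconds interval)
        : interval_(interval), next_(std::chrono::steady_clock::now()) {}

    void acquire() {
        auto now = std::chrono::steady_clock::now();
        if (now < next_)
            std::this_thread::sleep_until(next_);
        next_ = std::max(next_, now) + interval_;
    }

private:
    std::chrono::milliseconds interval_;
    std::chrono::steady_clock::time_point next_;
};
```

Calling `acquire()` before each download request, with an interval of a few seconds, keeps the request rate under whatever threshold triggers the block.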

Icecast multiple sources, same mountpoint/stream

I've been trying to find an answer to this question and am not sure it's possible.
The scenario:
My friend and I want to host a live stock-trading alert broadcast. I have Icecast set up successfully on a Linux server and am able to broadcast my voice using the BUTT encoder/client. This all works fine. But is there any way to get my friend, in a different location, broadcasting on the same mountpoint/stream? I've tried starting BUTT as a second client on the same mountpoint, and it simply won't connect. If we set up a different mountpoint/stream, the end user (with a web player) can only listen to one stream at a time by default.
So is there any way to mix the streams? Share the stream with two sources?
My only thought at this point is to have two web players on the page, hidden, and auto-start them at the same time when the user gets to the page.
Thanks,
Max
It is not possible; Icecast is not intended for this use case. You might want to use something like Mumble to talk together and stream the Mumble audio to Icecast, instead of having both of you streaming to Icecast.

Available event for connection timeout for streams publishing over RTSP?

I use Wowza GoCoder to publish video to a custom Wowza live application. In my application I attach an IRTSPActionNotify event listener within the onRTPSessionCreate callback. In the onRecord callback of my IRTSPActionNotify I perform various tasks - start recording the live stream, among other things. In my onTeardown callback I then stop the recording and do some additional processing of the recorded video, like moving the video file to a different storage location.
What I just noticed is that if the encoder times out, due to a lost connection, power failure or some other sudden event, I won't receive an onTeardown event, not even when the RTSP session times out. This is a big issue for me, since I need to do this additional processing before I make the published stream available for on-demand viewing through another application.
I've been browsing through the documentation looking for an event or a utility-class that could help me out, but so far to no avail.
Is there some reliable event, or other reliable way, to know that a connection has timed out, so that I can trigger this processing also for streams that don't fire a teardown event?
I first discovered this issue when I lost connection on my mobile device while encoding video using the Wowza GoCoder app for iOS, but I guess the issue would be the same for any encoder.
In my Wowza modules, I have the following pattern, which proved to be quite reliable so far:
I have a custom worker thread that iterates over all client types. This allows me to keep track of clients, and I have found that eventually all kinds of disasters lead to clients being removed from those lists after (admittedly unclear) timeouts.
I would try tracking (add / remove) clients in your own Set and see if that is more accurate.
You could also try and see if anything gets called in that case in IMediaStreamActionNotify2.
I have seen onStreamCreate(IMediaStream stream) and onStreamDestroy(IMediaStream stream) being called in ModuleBase in case of GoCoder on iOS, and I am attaching an instance of IMediaStreamActionNotify2 to the stream by calling stream.addClientListener(actionNotify)
On the GoCoder platform: I am not sure that it's the same on Android; the GoCoder Android and iOS versions have a fundamental difference, namely the streaming protocol itself, which leads to different API calls and behaviour on the backend side. Don't go live without testing on both platforms. :-)

Asynchronous server stops getting data from client for no visible reason

I have a problem with a client-server application. As I've almost run out of sane ideas for solving it, I am asking for help. I've stumbled into the described situation three or four times now. The data provided is from the last failure, when I had turned on all possible logging, message dumping and so on.
System description
1) Client. Works under Windows. I take it as an assumption that there is no problem with its work (judging from the logs).
2) Server. Works under Linux (RHEL 5). It is the server where I have the problem.
3) Two connections are maintained between client and server: one for commands and one for sending data. Both work asynchronously. Both connections live in one thread and on one boost::asio::io_service.
4) Data sent from client to server consists of messages delimited by '\0'.
5) Data load is about 50 MB/hour, 24 hours a day.
6) Data is read on the server side using boost::asio::async_read_until with the corresponding delimiter.
Problem
- For two days the system worked as expected.
- On the third day at 18:55 the server read one last message from the client and then stopped reading them. No info in the logs about new data.
- From 18:55 to 09:00 (14 hours) the client reported no errors. So it sent data (about 700 MB) successfully and no errors arose.
- At 08:30 I started investigating the problem. The server process was alive, and both connections between server and client were alive too.
- At 09:00 I attached to the server process with gdb. The server was in a sleeping state, waiting for some signal from the system. I believe I accidentally hit Ctrl+C and maybe there was some message.
- Later in the logs I found a message like 'system call interrupted'. After that both connections to the client were dropped, the client reconnected, and the server started working normally.
- The first message processed by the server was timestamped at 18:57 on the client side. So after resuming normal work, the server didn't drop all the messages up to 09:00; they were stored somewhere, and it processed them accordingly.
Things I've tried
- Simulated the scenario above. As the server dumped all incoming messages, I wrote a small script which presented itself as the client and sent all the messages back to the server again. The server died with an out-of-memory error, but unfortunately that was because of the high data load (about 3 GB/hour this time), not because of the original error. As it was Friday evening, I had no time to repeat the experiment correctly.
- Nevertheless, I ran the server through Valgrind to detect possible memory leaks. Nothing serious was found (apart from the server dying because of the high load); no huge memory leaks.
Questions
- Where were these 700 MB of data which the client sent and the server didn't get? Why were they persistent and not lost when the server restarted the connection?
- It seems to me that the problem is somehow connected with the server not getting messages from boost::asio::io_service. The buffer gets filled with data, but no calls to the read handler are made. Could this be a problem on the OS side? Something wrong with the asynchronous calls, maybe? If so, how could this be checked?
- What can I do to detect the source of the problem? As I said, I've run out of sane ideas, and each experiment costs a lot of time (it takes about two or three days to get the system into the described state), so I need to run as many checks per experiment as I can.
Would be grateful for any ideas I can use to get to the error.
Update: OK, it seems the error was a synchronous write left in the middle of otherwise asynchronous client-server interaction. As both connections lived in one thread, this synchronous write blocked the thread for some reason, and all interaction on both the command and data connections stopped. So I changed it to the async version, and now it seems to work.
As I said, I've run out of sane ideas, and each experiment costs a lot of time (it takes about two or three days to get the system into the described state)
One way to simplify the investigation of this problem is to run the server inside a virtual machine until it reaches the broken state. Then you can take a snapshot of the whole system and revert to it every time things go wrong during the investigation. At least you will not have to wait three days to reach this state again.

Winsock IOCP Server Stress Test Issue

I have a Winsock IOCP server written in C++ using TCP/IP connections. I have tested this server locally, using the loopback address with a client simulator, and have been able to get upwards of 60,000 clients, no sweat. The issue I am having is when I run the server at my house and the client simulator at a friend's house. Everything works fine until we hit around 3700 connections; after that, every call to connect() fails on the client side with a return of 10060 (the Winsock timed-out error). Last night this number was 3700, but it has been around 300 before, and we have also seen it near 1000. Whatever the number is, every time we try to simulate it, it fails right around that number (within 10 or so).
Both computers are running Windows 7 Ultimate. We have also both modified the TCP/IP registry setting MaxTcpConnections to around 16 million, and changed the MaxUserPort setting from its default of 5000 to 65k. No useful information is showing up in the event viewer. We both watched our resource monitors as well; we haven't even reached 1% network utilization, and CPU usage is also close to 0%.
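For reference, the MaxUserPort value lives under the Tcpip parameters key and can be set from an elevated command prompt roughly like this (65534 is the documented maximum; a reboot is required before it takes effect):

```shell
reg add HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters /v MaxUserPort /t REG_DWORD /d 65534 /f
```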
We just got off the phone with our ISP, and they are saying that they are not limiting us in any way but the guy was kinda unsure and ended up hanging up on us anyway after a 30 minute hold time...
We are trying everything to figure this issue out but cannot come up with a solution. I would be very grateful if someone out there could give us a hand with it.
P.S. Both computers are on Verizon FIOS with the same Verizon router. Another thing to note: the server is using WSAAccept and NOT AcceptEx. The client simulator is connecting over many seconds though, so I am pretty sure the connects are not getting backlogged. We have tried changing the speed at which the client simulator connects, and no matter what speed it is set to, it fails right around the same number each time.
UPDATE
We simulated 2 separate clients (on 2 separate machines) on network A, with the server running on network B. Each client was only able to make about half (about 1600) of the connections to the server. We were initially using a port below 1,000; this has been changed to above 50,000. The router logs on both machines showed nothing. We are both using the Actiontec MI424WR Verizon FIOS router. This leads me to believe the problem is not in the client code. The server throws no errors and shows no unexpected behavior. Could this be an ISP/router issue?
UPDATE
The solution has been found. The Verizon router we were using (MI424WR revision C) is unable to handle more than 3700 connections; we tested this with a separate set of networks. Thanks for the help, guys!
Thanks
- Rick
I would have guessed that this was a MaxUserPort issue, but you say you've changed that. Did you reboot after changing it?
Run the test on the exact same computers on your local network (this will take the computers out of the equation).
Could the issue be that one of your routers is not up to the job?