Icecast2: Mountpoint buffers for a long time during browser playback - icecast

When I try to play my Icecast stream in a browser player or directly by visiting the mountpoint, it takes one to two minutes until I hear any sound. Which Icecast setting affects this behaviour? My server hardware can't be the reason. The problem only affects browsers - with a desktop player there is no buffering delay. When I use SHOUTcast, every web player starts playing right away.
<icecast>
<location>Earth</location>
<admin>mail#test.test</admin>
<limits>
<clients>200</clients>
<sources>3</sources>
<threadpool>5</threadpool>
<queue-size>524288</queue-size>
<client-timeout>20</client-timeout>
<header-timeout>15</header-timeout>
<source-timeout>60</source-timeout>
<burst-on-connect>0</burst-on-connect>
<burst-size>65535</burst-size>
</limits>
<authentication>
<source-password>hackme</source-password>
<relay-password>hackme</relay-password>
<admin-user>admin</admin-user>
<admin-password>hackme</admin-password>
</authentication>
<!-- {%comment-open-if:icecast.directory.yp-url==""%} -->
<directory>
<yp-url-timeout>15</yp-url-timeout>
<yp-url>http://yp.shoutcast.com</yp-url>
</directory>
<directory>
<yp-url-timeout>15</yp-url-timeout>
<yp-url>http://www.oddsock.org/cgi-bin/yp-cgi</yp-url>
</directory>
<directory>
<yp-url-timeout>15</yp-url-timeout>
<yp-url>http://dir.xiph.org/cgi-bin/yp-cgi</yp-url>
</directory>
<!-- {%comment-close-if:icecast.directory.yp-url==""%} -->
<hostname>test.test</hostname>
<port>8008</port>
<bind-address>1.1.1.1</bind-address>
<!-- Only define a <mount> section if you want to use advanced options,
like alternative usernames or passwords -->
<mount>
<bitrate>128</bitrate>
<mount-name>/mp3</mount-name>
<fallback-override>0</fallback-override>
<fallback-when-full>0</fallback-when-full>
<public>1</public>
<max-listeners>150</max-listeners>
<fallback-mount></fallback-mount>
<genre>alternative</genre>
<type>audio/mpeg</type>
</mount>
<mount>
<bitrate>64</bitrate>
<mount-name>/mobile</mount-name>
<fallback-override>0</fallback-override>
<fallback-when-full>0</fallback-when-full>
<public>1</public>
<max-listeners>50</max-listeners>
<fallback-mount></fallback-mount>
</mount>
<fileserve>1</fileserve>
<paths>
<basedir>/usr/local/centovacast/var/vhosts/tester/</basedir>
<logdir>var/log/</logdir>
<webroot>web/</webroot>
<adminroot>admin/</adminroot>
<pidfile>var/run/server.pid</pidfile>
<alias source="/" dest="/status.xsl"></alias>
</paths>
<logging>
<accesslog>access.log</accesslog>
<errorlog>error.log</errorlog>
<playlistlog>playlist.log</playlistlog>
<loglevel>2</loglevel>
<!-- 4 Debug, 3 Info, 2 Warn, 1 Error -->
</logging>
<security>
<chroot>0</chroot>
</security>
</icecast>

Try to increase the queue-size and burst-size:
<queue-size>1048576</queue-size>
<burst-size>943718</burst-size>
I can't explain it clearly, but you can follow this thread: https://www.internet-radio.com/community/threads/very-slow-buffering-in-android.20139/
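One related thing worth checking in the config from the question: <burst-on-connect> is set to 0 there. burst-on-connect is the older, deprecated form of the same setting that burst-size controls, and with no burst sent at connect time a browser has to fill its playback buffer in real time, which looks exactly like a long initial buffering pause. A sketch of the <limits> entries involved, using the values suggested above (untested against your setup):
<burst-on-connect>1</burst-on-connect>
<burst-size>943718</burst-size>
<queue-size>1048576</queue-size>
As the Icecast docs recommend, keep burst-size comfortably smaller than queue-size.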

When MP3 is streamed, it is segmented arbitrarily and sent as-is. That is, Icecast doesn't do any inspection or alignment of frames, meaning there can be a slight delay on the client when re-syncing to servers with small buffers.
However, a delay of a minute is an extremely long time. There was an issue with LAME (3.99.1 I think?) which was creating streams that Chrome had difficulty syncing to. You didn't mention what encoder you're using, but if it's using LAME, try upgrading or downgrading the version. If you're using something else, try switching to LAME. Also, try different browsers to see if you're getting this problem just in Chrome.
Progressive MP3 over HTTP (same as your setup) is the most common form of internet radio, and it can work just fine if set up correctly.
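As a quick sanity check on the browser side, a bare HTML5 audio element pointed straight at the mountpoint takes every player framework out of the equation (hostname, port and mount below are taken from the config in the question; substitute your real stream URL):
<audio controls src="http://test.test:8008/mp3"></audio>
If this also takes a minute or more to start, the delay is coming from the server or encoder rather than from any particular web player.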

Related

Why isn't this gstreamer/hls.js configuration working on all browsers?

I am trying to get a live RTSP feed from a webcam to display on a website. I have a Linux server I am running gstreamer on and I am using hls.js to serve the feed up. I have followed a number of examples out there, but nothing I try can get this working across all browsers/devices. Here's what I have right now, and the results I am seeing.
Gstreamer config
This is my gstreamer script - I suspect the issue might be here with encoding settings, but I'm not sure what to try:
#!/bin/bash
gst-launch-1.0 -v -e rtspsrc protocols=tcp location=rtsp://XXX.XXX.XXX.XXX:XXXX/user=USER_password=PASSWORD_channel=1_stream=0.sdp?real_stream ! queue ! rtph264depay ! h264parse config-interval=-1 ! mpegtsmux ! hlssink location="%06d.ts" target-duration=5
index.html
Here is the webpage serving the feed up:
<!DOCTYPE html>
<head>
<title>Live Cam Test</title>
<script src="https://cdn.jsdelivr.net/npm/hls.js@latest"></script>
</head>
<body>
<video id="video" controls="controls" muted autoplay></video>
<script>
if (Hls.isSupported()) {
var video = document.getElementById('video');
var hls = new Hls();
// bind them together
hls.attachMedia(video);
hls.on(Hls.Events.MEDIA_ATTACHED, function () {
console.log("video and hls.js are now bound together !");
hls.loadSource("http://ServerName/live/playlist.m3u8");
hls.on(Hls.Events.MANIFEST_PARSED, playVideo);
});
}
</script>
</body>
</html>
Currently, this setup works the best in Chrome on Windows. The video is loaded, it autoplays, and it loads new segments as it plays, although it does seem to pause for a few seconds here and there and eventually gets a bit behind the live video.
On iOS devices, I cannot browse to the index.html page; I need to navigate directly to the playlist.m3u8 file. Once I do that, it appears to work pretty well.
On OS X, it doesn't appear to work in any browser - I tried Chrome, Safari, and Brave. I get weird results: sometimes it loads a single frame of the video and stops, sometimes it doesn't load anything.
I have tried the tutorials and code examples from hls.js's documentation and still no dice, so I think I must be doing something wrong in my gstreamer setup. Any help is much appreciated!
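Not a full answer, but one detail that may explain the iOS behaviour: hls.js needs Media Source Extensions, which Safari on iOS does not expose, while Safari can play HLS playlists natively. The hls.js documentation suggests a fallback along these lines (sketch only, reusing the playlist URL from the snippet above):
if (Hls.isSupported()) {
    // MSE available: use hls.js exactly as in the snippet above
} else if (video.canPlayType('application/vnd.apple.mpegurl')) {
    // Safari / iOS: hand the playlist to the native HLS player
    video.src = 'http://ServerName/live/playlist.m3u8';
    video.addEventListener('loadedmetadata', function () { video.play(); });
}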

AWS - End of script output before headers: wsgi.py

I have a Django application that does some heavy computation. It works very well with smaller data, both on my machine and on AWS Elastic Beanstalk. But when the data becomes large, on AWS it gives an internal server error, and the logs show:
[core:error]End of script output before headers: wsgi.py
However, it works fine on my machine.
The code that consistently triggers this error is:
[my_big_lst[int(i[0][1])-1].appendleft((int(i[0][0]) - i[1])) for i in itertools.product(zipped_list,temp_list)]
where:
my_big_lst is a big list of deques
zipped_list is a large list of tuples
temp_list is a large list of numbers
Notably, as the data grows, the processing time also increases; this problem only occurs on AWS with large data, while on my machine it always works fine.
Update:
I worked out that this error happens when the processing time exceeds 60 seconds. I also changed the load balancer idle timeout to 3600, but it had no effect; the error is still there.
Can anyone suggest a solution?
If you are using a C-extension module, you could try setting
WSGIApplicationGroup %{GLOBAL}
in your virtualhost.
Some C-extension modules do not work with Python sub-interpreters. However, since your code works for a smaller data set, your problem might instead be solved by setting memory-related directives.
https://code.google.com/archive/p/modwsgi/wikis/ApplicationIssues.wiki#Python_Simplified_GIL_State_API
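For context, a minimal sketch of where that directive sits in a mod_wsgi virtual host (the paths and process names below are placeholders, not taken from your Elastic Beanstalk environment, where the Apache config is normally generated for you):
<VirtualHost *:80>
    WSGIScriptAlias / /path/to/project/wsgi.py
    WSGIApplicationGroup %{GLOBAL}
    WSGIDaemonProcess myapp processes=2 threads=15
    WSGIProcessGroup myapp
</VirtualHost>
Since your failures reportedly start right around the 60-second mark, Apache's default Timeout of 60 seconds may also be worth ruling out.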

Getting hadoop pipes to work on os x with file access

My apologies if this question is trivial, but I haven't been able to solve it properly despite spending a few hours on Google…
I have compiled and installed hadoop 2.5.2 from source on OS X 10.8. I think all went well with this, even though getting the native libraries compiled was a bit of a pain.
To verify my installation, I tried the examples that ship with hadoop like this:
$ hadoop jar /Users/per/source/hadoop-2.5.2-src/hadoop-dist/target/hadoop-2.5.2/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.5.2.jar pi 2 5
which gives me back
Job Finished in 1.774 seconds
Estimated value of Pi is 3.60000000000000000000
So that seems to indicate that hadoop is at least working, I think.
Since my end goal is to get this working with file I/O from a C++ program using pipes, I also did try the following as an intermediate step:
$ hdfs dfs -put etc/hadoop input
$ hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.5.2.jar grep input output 'dfs[a-z.]+'
Which also seems to work (as in producing the correct output).
Then, finally, I tried the wordcount.cpp example, and this is not working; the error message is quite clear, but I fail to understand how to fix it:
2015-10-21 13:38:33,539 WARN org.apache.hadoop.security.UserGroupInformation: No groups available for user per
2015-10-21 13:38:33,678 INFO org.apache.hadoop.security.JniBasedUnixGroupsMapping: Error getting groups for per: getgrouplist: error looking up group. 5 (Input/output error)
So obviously there is something I don't get with the file permissions, but what? My hdfs-site.xml file looks like this, where I have tried to turn off permissions checks altogether.
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
<property>
<name>dfs.web.ugi</name>
<value>per</value>
</property>
<property>
<name>dfs.permissions</name>
<value>false</value>
</property>
</configuration>
But I am confused since I at least seem to be able to use grep on files in hdfs (per the built-in example above).
Any feedback would be greatly appreciated. I am running all of this as myself, so I haven't created a new user for only hadoop, since I am only running it locally on my laptop for now.
Edit: Update with some additional output from my terminal window, according to the discussion below:
<snip>
15/10/21 14:47:41 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
15/10/21 14:47:41 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
15/10/21 14:47:41 INFO mapred.LocalJobRunner: map task executor complete.
15/10/21 14:47:41 WARN mapred.LocalJobRunner: job_local676909674_0001
java.lang.Exception: java.lang.NullPointerException
at org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(LocalJobRunner.java:462)
at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:522)
Caused by: java.lang.NullPointerException
at org.apache.hadoop.mapred.pipes.Application.<init>(Application.java:104)
at org.apache.hadoop.mapred.pipes.PipesMapRunner.run(PipesMapRunner.java:69)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:430)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:342)
<snip>
I can add more if that is relevant
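In case the launch command matters here: a C++ pipes job is typically submitted along these lines (the HDFS location of the binary and the -program path below are placeholders, not the exact ones used above):
$ hdfs dfs -put wordcount bin/wordcount
$ hadoop pipes -D hadoop.pipes.java.recordreader=true -D hadoop.pipes.java.recordwriter=true -input input -output pipes-output -program bin/wordcount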

Playing flv files from Cloudfront/S3 with Strobe Media Playback

I'm using the OSMF's Strobe Media Playback player to try and play files from AWS Cloudfront/S3
The bucket is called ct.recorder. The CloudFront distribution is called 1dm7svtk8jb00c.cloudfront.net, and its origin is ct.recorder.
The video within the bucket is called vid_test001
I've tried initializing the player with rtmp://s34osaecrafusl.cloudfront.net/cfx/st/vid_test001
But that doesn't work.
I get Connection attempt rejected by FMS server. Connection failed.
I've also tried it with .flv at the end, but that doesn't work either.
Am I not linking to the file properly, or is it my player?
Well, I had an entire answer written up, speculating that it was related to bucket permissions, and now I'm scratching that answer and posting this, instead. :)
$ rtmpdump -r rtmp://s34osaecrafusl.cloudfront.net/cfx/st/vid_test001.flv -o testfile.flv
RTMPDump v2.4
(c) 2010 Andrej Stepanchuk, Howard Chu, The Flvstreamer Team; license: GPL
Connecting ...
WARNING: HandShake: client signature does not match!
INFO: Connected...
Starting download at: 0.000 kB
INFO: Metadata:
INFO: duration 13.82
INFO: videocodecid 2.00
INFO: audiocodecid 6.00
INFO: canSeekToEnd FALSE
INFO: createdby AMS 5
INFO: creationdate Tue Dec 03 13:41:46 2013
1190.238 kB / 13.82 sec (100.0%)
Download complete
This actually works for me... both with, and without, the .flv on the end, and the resulting file is a 7 second video of a guy looking at a webcam.
Using "smplayer" for Windows, I can connect to cloudfront with the rtmp:// url and stream the video, but it only works without the .flv on the end, using:
MPlayer Redxii-SVN-r36243-4.6.3 (C) 2000-2013 MPlayer Team
Custom build by Redxii, http://smplayer.sourceforge.net
Compiled against FFmpeg version N-52798-gf5846dc
Build date: Sun May 5 23:51:25 EDT 2013
This doesn't quite answer your question of why it isn't working, except to say that your player seems to be lying to you about "Connection attempt rejected by FMS server", because, at least from here, the stream is fine - except for this part, and I don't know what it means.
WARNING: HandShake: client signature does not match!
However, that could just be a distraction.
It looks as if it's going to be your player... so trying other players would be worthwhile.
It is, of course, possible that there's a regional issue involving the particular CloudFront edge location you access from your location, which could be significantly different from the one I'm hitting, since edge selection is geographic... but if another player works where you are, then you may have the answer you're looking for. Firing up Wireshark and analyzing the protocol exchange could be an interesting exercise also.
Afterthought: the extra slash in your path could also be blowing something's mind, since an RTMP url apparently consists of two distinct components, "application"/"stream_name" and the point of delineation may be ambiguous at some level to some component in the chain. If cloudfront thinks the "application" is "cfx" and the stream is "st/vid_test001" but the client assumes the "application" is "cfx/st" with stream name "vid_test001" it seems like there could be some potential for interoperability trouble there. This is wild speculation, but perhaps worth experimentation, too.
The embed parameter urlIncludesFMSApplicationInstance needs to be set to true.
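For anyone wiring this up, a sketch of how that parameter is typically passed to Strobe Media Playback via flashvars (the src value reuses the RTMP URL from the question; untested against this particular distribution):
<param name="flashvars" value="src=rtmp://s34osaecrafusl.cloudfront.net/cfx/st/vid_test001&amp;urlIncludesFMSApplicationInstance=true" />
With that flag set, the player parses cfx as the application and st as the instance rather than guessing where the stream name begins, which lines up with the path-ambiguity speculation in the answer above.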

AppFabric error on objCache.Get(key)

I am able to put/add 70 MB of data to AppFabric successfully, but I get the following error when I try to retrieve it through the _cache.Get(key) method.
Please let me know what's wrong.
Also, let me know whether it is a good idea to cache data this large.
Error
"ErrorCode:SubStatus:The connection was terminated,
possibly due to server or network problems or serialized Object size
is greater than MaxBufferSize on server. Result of the request is
unknown."
Stack Trace
at Microsoft.ApplicationServer.Caching.DataCache.ThrowException(ResponseBody respBody)
at Microsoft.ApplicationServer.Caching.DataCache.InternalGet(String key, DataCacheItemVersion& version, String region)
at Microsoft.ApplicationServer.Caching.DataCache.Get(String key)
Web Config at client
<dataCacheClient requestTimeout="150000" channelOpenTimeout="20000" maxConnectionsToServer="1">
<localCache isEnabled="false" sync="TimeoutBased" ttlValue="300" objectCount="10000"/>
<clientNotification pollInterval="300" maxQueueLength="10000"/>
<hosts><host name="MachineName" cachePort="22233"/></hosts>
<securityProperties mode="None" protectionLevel="None" />
<transportProperties connectionBufferSize="131072" maxBufferPoolSize="2147483647" maxBufferSize="2147483647" maxOutputDelay="2" channelInitializationTimeout="60000" receiveTimeout="2147483647"/>
</dataCacheClient>
Configuration at Server
<transportProperties maxBufferPoolSize="2147483647" maxBufferSize="2147483647" />
You should consider an alternative to caching a 70 MB object. AppFabric is meant for very frequently accessed volatile data, ideally below 256 KB (based on their benchmark results, which were available here).
Note: The benchmark results link seems to be broken at the moment.