ICECAST - When is the listeners_peak stat updated?

I'm using IceCast 2.4.2 for my online radio server.
I've noticed that the stats include a listener peak value that represents the maximum number of concurrent connections to my mount.
My question is: when does this stat update? Is this the peak for the day? The week? The hour?
The official docs say:
Peak concurrent number of listener connections for this mount point.

That value is for the life-cycle of the source client connection.
Any mountpoint-specific values are limited to its life-cycle and will be reset upon disconnection of the source client, as at that point the mountpoint ceases to exist.
The global values are for the life-cycle of the Icecast server process (since it was started).
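If you want to watch the per-mount peak reset when the source reconnects, you can poll Icecast's status-json.xsl endpoint (available in 2.4.x). A minimal sketch; the host, port, and exact field names may differ on your setup:

```python
# Poll the public JSON stats and print the per-mount entries.
import json
import urllib.request

url = "http://localhost:8000/status-json.xsl"  # placeholder host/port
with urllib.request.urlopen(url) as resp:
    stats = json.load(resp)

# "source" is a dict for a single mount or a list for several; the
# per-mount listener peak is reported alongside the current listener count.
print(stats["icestats"].get("source"))
```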

Related

Got an error reading communication packets in Google Cloud SQL

Since 31st March I've been getting the following error in Google Cloud SQL:
Got an error reading communication packets.
I have been using Google Cloud SQL for 2 years, but have never faced such a problem before.
I'm very worried about it.
This is the detailed error message:
textPayload: "2019-04-29T17:21:26.007574Z 203385 [Note] Aborted connection 203385 to db: {db_name} user: {db_username} host: 'cloudsqlproxy~{private ip}' (Got an error reading communication packets)"
While it is true that this error message often occurs after a maintenance period, it isn't necessarily a cause for concern, as this is known behavior in MySQL.
Possible explanations for why this issue is happening are:
A large increase in connection requests to the instance, with the number of active connections rising over a short period of time.
Freezing / unavailability of the instance, which can also occur due to a burst of connections in a very short time interval. It is observed that this freezing always happens alongside an increase in connection requests; the increase overloads the instance, leaving it unavailable to respond to further connection requests until the number of connections decreases or the instance stabilizes.
The server was too busy to accept new connections.
There were high rates of previous connections that were not closed correctly.
The client terminated the connection abnormally.
The readTimeout setting being set too low in the MySQL driver.
In an excerpt from the documentation, it is stated that:
There are many reasons why a connection attempt might not succeed. Network communication is never guaranteed, and the database might be temporarily unable to respond. Make sure your application handles broken or unsuccessful connections gracefully.
A low Cloud SQL Proxy version can also be the reason for such incidents; upgrading to the latest version (v1.23.0) can be a troubleshooting step.
The IP from which you are trying to connect may not be added to the Authorized Networks in the Cloud SQL instance.
Some possible workarounds for this issue, depending on your case, are the following:
In the case that the issue is related to a high load, you could retry the connection using an exponential backoff to prevent sending too many simultaneous connection requests. The best practice here is to exponentially back off your connection requests and add randomized backoffs to avoid throttling and potentially overloading the instance. As a way to mitigate this issue in the future, it is recommended that connection requests be spaced out to prevent overloading. Depending on how you are connecting to Cloud SQL, exponential backoffs may already be in use by default with certain ORM packages.
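A minimal retry-with-backoff sketch, assuming PyMySQL as the driver (host and credentials are placeholders):

```python
import random
import time

import pymysql


def connect_with_backoff(max_attempts=5):
    """Retry the connection with exponential backoff plus jitter."""
    for attempt in range(max_attempts):
        try:
            return pymysql.connect(host="127.0.0.1", user="app",
                                   password="secret", database="mydb")
        except pymysql.err.OperationalError:
            if attempt == max_attempts - 1:
                raise
            # 1 s, 2 s, 4 s, ... plus up to 1 s of random jitter.
            time.sleep(2 ** attempt + random.random())
```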
If the issue could be related to an accumulation of long-running inactive connections, you can find out whether that is your case by running SHOW FULL PROCESSLIST on your database and looking for connections with a high Time, or connections where Command is Sleep.
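For example, a quick way to list such connections from Python (again assuming PyMySQL; credentials are placeholders):

```python
import pymysql

conn = pymysql.connect(host="127.0.0.1", user="root",
                       password="secret", database="mysql")
with conn.cursor() as cur:
    cur.execute("SHOW FULL PROCESSLIST")
    # MySQL columns: Id, User, Host, db, Command, Time, State, Info
    for pid, user, host, db, command, time_s, state, info in cur.fetchall():
        if command == "Sleep" and time_s > 600:  # idle for > 10 minutes
            print(pid, user, host, time_s)
```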
If this is your case, you have a few possible options:
If you are not using a connection pool, you could try to update the client application logic to properly close connections immediately at the end of an operation, or use a connection pool to limit your connections' lifetime. In particular, it is ideal to manage the connection count with a connection pool: unused connections are recycled, and the number of simultaneous connection requests can be limited through the maximum pool size parameter.
If you are using a connection pool, you could return idle connections to the pool immediately at the end of an operation and set a shorter timeout by adjusting the wait_timeout or interactive_timeout flag values. For example, set the Cloud SQL wait_timeout flag to 600 seconds to force refreshing connections.
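As an illustration, a pooled engine sketch with SQLAlchemy (the connection string is a placeholder); keeping pool_recycle below the server's wait_timeout avoids handing out connections the server has already dropped:

```python
from sqlalchemy import create_engine, text

engine = create_engine(
    "mysql+pymysql://user:secret@127.0.0.1/mydb",
    pool_size=5,       # steady-state connections kept open
    max_overflow=2,    # allows short bursts above pool_size
    pool_recycle=540,  # recycle before a 600 s wait_timeout closes them
    pool_timeout=30,   # fail fast instead of queueing forever
)

with engine.connect() as conn:  # connection returns to the pool on exit
    conn.execute(text("SELECT 1"))
```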
To check the network and port connectivity:
Step 1. Confirm TCP connectivity on port 3306 with tcptraceroute or netcat.
Step 2. If Step 1 succeeded, use the mysql client to check for timeouts or errors.
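An equivalent port-level check can also be scripted from Python (the host below is a placeholder):

```python
import socket

try:
    # Succeeds only if the TCP handshake on port 3306 completes.
    with socket.create_connection(("10.0.0.5", 3306), timeout=5):
        print("TCP connect to 3306 OK")
except OSError as exc:
    print("TCP connect failed:", exc)
```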
When the client might be terminating the connection abruptly, you could check for the following:
If the MySQL client or mysqld server receives a packet bigger than max_allowed_packet bytes, or the client receives a "packet too large" message, you could send smaller packets or increase the max_allowed_packet flag value on both client and server.
If there are transactions that are not being properly committed using both "begin" and "commit", the client application logic needs to be updated to properly commit the transaction.
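To see the server's current limit before raising it, you can query the variable directly (a sketch, again with placeholder credentials):

```python
import pymysql

conn = pymysql.connect(host="127.0.0.1", user="app", password="secret")
with conn.cursor() as cur:
    cur.execute("SHOW VARIABLES LIKE 'max_allowed_packet'")
    print(cur.fetchone())  # e.g. ('max_allowed_packet', '67108864')
```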
There are several utilities that I think will be helpful here; if you can, install the mtr and tcpdump utilities to monitor the packets during these connection-increasing events.
It is strongly recommended to enable general_log in the database flags. Another suggestion is to also enable the slow_query database flag and output it to a file. Also have a look at this GitHub issue comment and go through the list of additional solutions proposed for this issue here.
This error message indicates a connection issue, either because your application doesn't terminate connections properly or because of a network issue.
As suggested in these troubleshooting steps for MySQL or PostgreSQL instances from the GCP docs, you can start debugging by checking that you follow best practices for managing database connections.

GCP Compute Engine limits download to 50 K/s?

For some reason, download traffic from a virtual machine on GCP (Google Cloud Platform) with Debian 9 is limited to 50 K/s. Upload seems to be fine, in line with my local upload link.
It is the same with scp or an https download. Any suggestions as to what might be wrong, or where to look?
Machine type: n1-standard-1 (1 vCPU, 3.75 GB memory)
CPU platform: Intel Skylake
Zone: europe-west4-a
Network interfaces: Premium tier
Thanks,
Mihaelus
Simple test:
wget https://hrcki.primasystems.si/Nova/assets/download.test.html
Output:
--2018-10-18 15:21:00--  https://hrcki.primasystems.si/Nova/assets/download.test.html
Resolving hrcki.primasystems.si (hrcki.primasystems.si)... 35.204.252.248
Connecting to hrcki.primasystems.si (hrcki.primasystems.si)|35.204.252.248|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 541422592 (516M) [text/html]
Saving to: `download.test.html.1'
0% []  1,073,152  48.7K/s  eta 2h 59m
It is always good to minimize variables when trying to diagnose. While it is unlikely that the use of HTTP is why things are that slow, you might consider using netperf or iperf3 to measure TCP bulk-transfer performance between your VM in GCP and your local system. You can do that either by hand or via PerfKit Benchmarker: https://cloud.google.com/blog/products/networking/perfkit-benchmarker-for-evaluating-cloud-network-performance
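If installing netperf or iperf3 is not an option, a crude bulk-transfer timing can be scripted directly; a minimal sketch (run receiver() on one end and sender() on the other; host and port are placeholders):

```python
import socket
import time

CHUNK = 64 * 1024
TOTAL = 64 * 1024 * 1024  # 64 MiB test payload


def receiver(port=5001):
    srv = socket.socket()
    srv.bind(("", port))
    srv.listen(1)
    conn, _ = srv.accept()
    received = 0
    start = time.monotonic()
    while True:
        data = conn.recv(CHUNK)
        if not data:  # sender closed the connection
            break
        received += len(data)
    elapsed = time.monotonic() - start
    print(f"{received / elapsed / 1024:.1f} KiB/s over {elapsed:.1f} s")


def sender(host, port=5001):
    conn = socket.create_connection((host, port))
    sent = 0
    while sent < TOTAL:
        conn.sendall(b"\0" * CHUNK)
        sent += CHUNK
    conn.close()
```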
It can be helpful to have packet traces, from both ends when possible, to look at. You want the packet traces to be started before the test; it is important to see the packets used to establish the TCP connection(s). They do not need to be full-packet traces, and often you don't want them to be: capturing just the first 96 bytes of each packet would be sufficient for this sort of investigation.
You might also consider taking snapshots of the network statistics offered by the OSes running in your GCP VM and your local system. For example, on *nix, take a snapshot of "netstat -s" before and after the test, and perhaps a traceroute from each end towards the other.
Network statistics and packet traces, along with as many details about the two endpoints as possible are among the sorts of things support organizations are likely to request when looking to help resolve an issue of this sort.

Redis PUBSUB connection issue after idle period

I am using nelikelov/redisclient version 0.5.0, with code the same as in the PUBSUB example provided in the library. My application subscribes to a channel and receives messages.
What I am facing is that every Monday, the application stops receiving messages from Redis.
Is there any timeout that I should handle in case the connection remains idle during the weekend? Shall I configure something extra in my application or in Redis to bypass this?
I'm not familiar with the client you're using, but Redis itself doesn't close idle connections (PubSub or not) by default and keeps them alive. You can verify that your Redis server is configured to maintain idle connections and keep them alive by examining the values of the timeout and tcp-keepalive directives (0 and 300 by default, respectively).
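You can read both directives straight off the server; a quick sketch using redis-py (not the nelikelov/redisclient library from the question; host and port are placeholders):

```python
import redis

r = redis.Redis(host="localhost", port=6379)
print(r.config_get("timeout"))        # 0 means idle clients are never closed
print(r.config_get("tcp-keepalive"))  # 300 seconds by default
```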
Other than the above and given the periodical aspect of the disconnects, I'd investigate the network settings of the client application server.

Automatically detect change in RSS feed

In my web scraper, I currently monitor an RSS feed and save the first title to a variable. Then I have a one-minute timer to go back and check the title against that variable.
IFTTT has a nice way of doing it, but are there any Python modules or libraries to automatically detect changes? At the same time, I can't be certain whether they are just running a very short timer themselves.
Some RSS servers support a push protocol. Check for the RSS Cloud element with feedparser, then use xmlrpclib to establish a persistent connection with the RSS server.
Your server may support PubSubHubbub, which is similar.
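If you do end up polling, feedparser supports conditional GETs, so unchanged feeds cost almost nothing; a minimal sketch (the feed URL is a placeholder):

```python
import time

import feedparser

URL = "https://example.com/feed.xml"  # placeholder feed
etag = modified = None

while True:
    d = feedparser.parse(URL, etag=etag, modified=modified)
    # HTTP 304 means the feed has not changed since the last fetch.
    if getattr(d, "status", None) != 304 and d.entries:
        etag = getattr(d, "etag", None)
        modified = getattr(d, "modified", None)
        print("Newest title:", d.entries[0].title)
    time.sleep(60)
```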

Monitor ColdFusion on Linux with monit?

We are trying to use monit to monitor services on our Ubuntu machine. I have successfully set up a host URL check to make sure that ColdFusion can render web pages and, if there is an error, to restart ColdFusion.
I was wondering if there is a way to get more stats into monit by monitoring the ColdFusion process, but I have been unable to find out whether ColdFusion creates a pid file.
Does ColdFusion 9 or JRun create a pid file for monit to use? Is there another way to monitor ColdFusion with monit?
ColdFusion can output real-time performance metrics such as:
Page hits per second
Database accesses per second
Number of queued requests
Number of running requests
Number of timed out requests
Average queue time
Average request time
Average database transaction time
Bytes incoming per second
Bytes outgoing per second
You can learn more about the output of this logging here: http://help.adobe.com/en_US/ColdFusion/9.0/Admin/WSc3ff6d0ea77859461172e0811cbf3638e6-7fe0.html#WS9F365555-357A-4a15-AC72-449EF611E342
I would be interested to learn how you set this up once complete. I'll have the same task in a few weeks.
Thanks!
You will need to create the PID file with a wrapper script around your Java application; I'm doing the same thing myself these days. To the best of my understanding, monit has to have the PID file to check the life of your service.
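A minimal wrapper sketch; the service command and PID file path are hypothetical and depend on your ColdFusion install:

```python
import pathlib
import subprocess

# Launch the service and record its PID for monit. This assumes the command
# stays in the foreground; a start script that daemonizes would need the
# PID of its forked child instead.
proc = subprocess.Popen(["/opt/coldfusion9/bin/coldfusion", "start"])
pathlib.Path("/var/run/coldfusion.pid").write_text(str(proc.pid))
proc.wait()
```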