Liquidsoap 1.3.1 — Mountpoint in use - icecast

When I shut down my Icecast server, there is occasionally a problem restarting it that forces me to reboot my computer.
The logs look like this:
14:52:22 soap.1 | started with pid 9817
14:52:22 soap.1 | Warning: ignored expression at line 12, char 20-96.
14:52:22 soap.1 | 2017/09/12 14:52:22 >>> LOG START
14:52:22 soap.1 | 2017/09/12 14:52:22 [main:3] Liquidsoap 1.3.1 (git://github.com/savonet/liquidsoap.git#3adeff73df0cd369401c7b46caaab058ef80880b:20170608:111503)
14:52:22 soap.1 | 2017/09/12 14:52:22 [main:3] Using: bytes=[distributed with OCaml 4.02 or above] pcre=7.2.3 dtools=0.3.3 duppy=0.6.0 duppy.syntax=0.6.0 cry=0.5.0 mm=0.3.0 xmlplaylist=0.1.4 lastfm=0.3.1 ogg=0.5.1 opus=0.1.2 speex=0.2.1 mad=0.4.5 flac=0.1.2 flac.ogg=0.1.2 dynlink=[distributed with Ocaml] lame=0.3.3 gstreamer=0.2.2 fdkaac=0.2.1 theora=0.3.1 bjack=0.1.5 alsa=0.2.3 ao=0.2.1 samplerate=0.1.4 taglib=0.3.3 camomile=0.8.5 faad=0.3.3 soundtouch=0.1.8 portaudio=0.2.1 pulseaudio=0.1.3 ladspa=0.1.5 dssi=0.1.2 lo=0.1.1
14:52:22 soap.1 | 2017/09/12 14:52:22 [gstreamer.loader:3] Loaded GStreamer 1.2.4 0
14:52:22 soap.1 | 2017/09/12 14:52:22 [frame:3] Using 44100Hz audio, 25Hz video, 44100Hz master.
14:52:22 soap.1 | 2017/09/12 14:52:22 [frame:3] Frame size must be a multiple of 1764 ticks = 1764 audio samples = 1 video samples.
14:52:22 soap.1 | 2017/09/12 14:52:22 [frame:3] Targetting 'frame.duration': 0.04s = 1764 audio samples = 1764 ticks.
14:52:22 soap.1 | 2017/09/12 14:52:22 [frame:3] Frames last 0.04s = 1764 audio samples = 1 video samples = 1764 ticks.
14:52:22 soap.1 | 2017/09/12 14:52:22 [threads:3] Created thread "generic queue #1".
14:52:22 soap.1 | 2017/09/12 14:52:22 [threads:3] Created thread "generic queue #2".
14:52:22 soap.1 | 2017/09/12 14:52:22 [threads:3] Created thread "non-blocking queue #1".
14:52:22 soap.1 | 2017/09/12 14:52:22 [threads:3] Created thread "non-blocking queue #2".
14:52:22 soap.1 | 2017/09/12 14:52:22 [ogr:3] Connecting mount ogr for source#localhost...
14:52:22 soap.1 | 2017/09/12 14:52:22 [ogr:2] Connection failed: 403, Mountpoint in use (HTTP/1.0)
14:52:22 soap.1 | 2017/09/12 14:52:22 [ogr:3] Will try again in 3.00 sec.
14:52:22 soap.1 | strange error flushing buffer ...
14:52:22 soap.1 | strange error flushing buffer ...
14:52:22 soap.1 | 2017/09/12 14:52:22 [threads:3] Created thread "wallclock_main" (1 total).
14:52:22 soap.1 | 2017/09/12 14:52:22 [clock.wallclock_main:3] Streaming loop starts, synchronized with wallclock.
14:52:22 soap.1 | 2017/09/12 14:52:22 [fallback_9219:3] Switch to sine_9218.
My guess is that sometimes, when it shuts down, the old mountpoint isn't properly removed.
Is there a way to manually delete this mountpoint, or some other way to resolve this?
Many thanks.

I sometimes have the same problem. For whatever reason the first instance hasn't exited cleanly and is still listening on the addr/port combo of the mountpoint, preventing the new instance from binding to it. You can fix it without rebooting: find the process causing the problem, then kill it.
For example, say your mountpoint is listening on port 8800. You can use the lsof command to identify the old process: add the -i option and specify the port to return results for, and you'll get something like this:
lsof -i:8800
COMMAND    PID   USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
liquidsoa 30511 liquid  20u  IPv4 947691      0t0  TCP 192.168.1.5:8800 (LISTEN)
So here the offending PID is 30511. If you kill it with kill -9 30511, liquidsoap should restart properly.
That covers the basic concept; now let's make it a one-liner.
We can add the -t (terse) option into the mix, telling lsof to drop the bits we don't need and print only the information we are interested in: the PID(s) we want to kill:
lsof -ti:8800
30511
Our command now returns only the pid. Perfect, let's pipe it:
lsof -ti:8800 | xargs kill -9
Job done. lsof -ti:8800 should now return nothing and liquidsoap/icecast/whatever should start properly.
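For an unattended restart, the same idea can go in a small wrapper script that keeps killing whatever holds the port until it is free. This is just a sketch using the example port 8800 from above, and it assumes lsof is installed:

```shell
# Sketch: free the mountpoint's port before restarting liquidsoap.
# Assumes lsof is available; 8800 is the example port used above.
PORT=8800
while lsof -ti:"$PORT" >/dev/null 2>&1; do
  # something is still listening; kill it and re-check
  lsof -ti:"$PORT" | xargs kill -9
  sleep 1
done
echo "port $PORT is free"
```

Running this before starting liquidsoap avoids the manual lsof/kill round-trip, at the cost of unconditionally killing whatever owns the port, so only use it on a port you know belongs to the stale instance.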

Related

Getting deployment error while installing PCF Dev

I am trying to install PCF Dev on a local machine running Windows 10, using the link below.
https://pivotal.io/platform/pcf-tutorials/getting-started-with-pivotal-cloud-foundry-dev/install-pcf-dev
During installation I am getting the error below in deplo-pass.log:
Task 576 | 10:31:39 | Preparing deployment: Preparing deployment (00:01:46)
Task 576 | 10:34:11 | Preparing package compilation: Finding packages to compile (00:00:01)
Task 576 | 10:34:12 | Creating missing vms: database/2fe9e267-1bb0-4be6-8a4b-b61e534bcd64 (0)
Task 576 | 10:34:12 | Creating missing vms: blobstore/8a9a1df4-39fe-4232-be3e-831d318bcb93 (0)
Task 576 | 10:34:12 | Creating missing vms: control/29cf5702-030d-4cac-ac9d-5e4221562e3a (0)
Task 576 | 10:34:12 | Creating missing vms: compute/927d5018-9f8d-4b94-aa37-fee45aef2280 (0)
Task 576 | 10:34:12 | Creating missing vms: router/d3df4a57-43dd-491d-91ce-c9eda8ca88f8 (0)
Task 576 | 10:34:46 | Creating missing vms: blobstore/8a9a1df4-39fe-4232-be3e-831d318bcb93 (0) (00:00:34)
Task 576 | 10:34:48 | Creating missing vms: router/d3df4a57-43dd-491d-91ce-c9eda8ca88f8 (0) (00:00:36)
Task 576 | 10:34:48 | Creating missing vms: compute/927d5018-9f8d-4b94-aa37-fee45aef2280 (0) (00:00:36)
Task 576 | 10:34:49 | Creating missing vms: database/2fe9e267-1bb0-4be6-8a4b-b61e534bcd64 (0) (00:00:37)
Task 576 | 10:34:57 | Creating missing vms: control/29cf5702-030d-4cac-ac9d-5e4221562e3a (0) (00:00:45)
Task 576 | 10:34:57 | Updating instance database: database/2fe9e267-1bb0-4be6-8a4b-b61e534bcd64 (0) (canary) (00:06:47)
Task 576 | 10:41:44 | Updating instance blobstore: blobstore/8a9a1df4-39fe-4232-be3e-831d318bcb93 (0) (canary) (00:01:03)
Task 576 | 10:42:47 | Updating instance control: control/29cf5702-030d-4cac-ac9d-5e4221562e3a (0) (canary) (01:22:36)
            L Error: 'control/29cf5702-030d-4cac-ac9d-5e4221562e3a (0)' is not running after update. Review logs for failed jobs: routing-api, cloud_controller_clock, credhub
Task 576 | 12:05:25 | Error: 'control/29cf5702-030d-4cac-ac9d-5e4221562e3a (0)' is not running after update. Review logs for failed jobs: routing-api, cloud_controller_clock, credhub
How do I review the logs of the failing jobs? Is there any way to see the logs for the failed jobs routing-api, cloud_controller_clock, and credhub?
You need to install the bosh cli first. https://bosh.io/docs/cli-v2-install/
Make sure bosh is installed:
my-mac: bosh -v
version 6.2.1-a28042ac-2020-02-10T18:41:00Z
Succeeded
Set the environment variables for bosh to connect to pcf-dev:
my-mac: cf dev bosh
Usage: eval $(cf dev bosh env)
my-mac: eval $(cf dev bosh env)
Ask bosh to show the name of your cf deployment, in this case cf-66ade9481d314315358c is the name:
my-mac: bosh deployments
Using environment '10.144.0.2' as client 'ops_manager'
Name Release(s) Stemcell(s) Team(s)
cf-66ade9481d314315358c binary-buildpack/1.0.30 bosh-warden-boshlite-ubuntu-xenial-go_agent/170.30 -
bosh-dns/1.10.0
bosh-dns-aliases/0.0.3
bpm/1.0.3
capi/1.71.4
cf-cli/1.9.0
cf-networking/2.18.2
cf-syslog-drain/8.1
cflinuxfs2/1.267.0
cflinuxfs3/0.62.0
consul/198
consul-drain/0.0.3
credhub/2.1.2
diego/2.22.1
dotnet-core-buildpack/2.2.5
garden-runc/1.18.0
go-buildpack/1.8.33
java-offline-buildpack/4.16.1
log-cache/2.0.1
loggregator/103.4
loggregator-agent/2.3
nats/26
nodejs-buildpack/1.6.43
php-buildpack/4.3.70
push-apps-manager-release/667.0.6
pxc/0.14.2
python-buildpack/1.6.28
routing/0.184.0
ruby-buildpack/1.7.31
silk/2.18.1
staticfile-buildpack/1.4.39
statsd-injector/1.5.0
uaa/66.0
1 deployments
Succeeded
Retrieve the logs with bosh, using the name from the Name column:
my-mac: bosh --deployment cf-66ade9481d314315358c logs
Using environment '10.144.0.2' as client 'ops_manager'
Using deployment 'cf-66ade9481d314315358c'
Task 784
Task 784 | 17:54:41 | Fetching logs for router/8828d2fd-10f6-4d1e-9a3f-ac3d6ef6b833 (0): Finding and packing log files
Task 784 | 17:54:41 | Fetching logs for database/e3b3bb98-1e73-41cf-be32-324280615a3b (0): Finding and packing log files
Task 784 | 17:54:41 | Fetching logs for compute/0820e2d7-998a-405a-bbea-e73a76ec26b4 (0): Finding and packing log files
Task 784 | 17:54:41 | Fetching logs for blobstore/5c3297a4-4ad0-43bc-8041-000ca8e38e28 (0): Finding and packing log files
Task 784 | 17:54:41 | Fetching logs for control/bf60350d-aab0-4db3-8cc2-05d666f5f3a8 (0): Finding and packing log files
Task 784 | 17:54:42 | Fetching logs for compute/0820e2d7-998a-405a-bbea-e73a76ec26b4 (0): Finding and packing log files (00:00:01)
Task 784 | 17:54:42 | Fetching logs for router/8828d2fd-10f6-4d1e-9a3f-ac3d6ef6b833 (0): Finding and packing log files (00:00:01)
Task 784 | 17:54:42 | Fetching logs for database/e3b3bb98-1e73-41cf-be32-324280615a3b (0): Finding and packing log files (00:00:01)
Task 784 | 17:54:42 | Fetching logs for blobstore/5c3297a4-4ad0-43bc-8041-000ca8e38e28 (0): Finding and packing log files (00:00:01)
Task 784 | 17:54:42 | Fetching logs for control/bf60350d-aab0-4db3-8cc2-05d666f5f3a8 (0): Finding and packing log files (00:00:01)
Task 784 | 17:54:43 | Fetching group of logs: Packing log files together
Task 784 Started Sat May 9 17:54:41 UTC 2020
Task 784 Finished Sat May 9 17:54:43 UTC 2020
Task 784 Duration 00:00:02
Task 784 done
Downloading resource 'f7d8c6d3-43f8-419a-a436-53a38155af47' to '/Users/my-mac/workspace/pcf-dev/cf-66ade9481d314315358c-20200509-195443-771607.tgz'...
0.00%
Succeeded
Unpack your downloaded log archive.
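The downloaded .tgz can then be unpacked with tar. A sketch, using the archive name printed in the output above (the logs for each instance are packed together inside it):

```shell
# Sketch: unpack the archive that `bosh logs` downloaded.
# ARCHIVE is the file name printed in the output above.
ARCHIVE=cf-66ade9481d314315358c-20200509-195443-771607.tgz
mkdir -p cf-logs
if [ -f "$ARCHIVE" ]; then
  tar -xzf "$ARCHIVE" -C cf-logs
  ls cf-logs
else
  echo "archive not found: $ARCHIVE"
fi
```

Inside, look for the control instance's files and check the logs of the failing jobs (routing-api, cloud_controller_clock, credhub).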

Problem App Engine app to connect to MySQL in CloudSQL

I've configured a Cloud SQL second generation instance and an App Engine application (Python 2.7) in one project. I've made the necessary settings according to that page.
app.yaml
runtime: python27
api_version: 1
threadsafe: true

env_variables:
  CLOUDSQL_CONNECTION_NAME: coral-heuristic-215610:us-central1:db-basic-1
  CLOUDSQL_USER: root
  CLOUDSQL_PASSWORD: xxxxxxxxx

beta_settings:
  cloud_sql_instances: coral-heuristic-215610:us-central1:db-basic-1

libraries:
- name: lxml
  version: latest
- name: MySQLdb
  version: latest

handlers:
- url: /main
  script: main.app
Now, when I try to connect from the app (inside Cloud Shell), I get the error:
OperationalError: (2002, 'Can\'t connect to local MySQL server through socket \'/var/run/mysqld/mysqld.sock\' (2 "No such file or directory")')
Direct connection works:
$ gcloud sql connect db-basic-1 --user=root
was successful...
MySQL [correction_dict]> SHOW PROCESSLIST;
+--------+------+----------------------+-----------------+---------+------+----------+------------------+
| Id | User | Host | db | Command | Time | State | Info |
+--------+------+----------------------+-----------------+---------+------+----------+------------------+
| 9 | root | localhost | NULL | Sleep | 4 | | NULL |
| 10 | root | localhost | NULL | Sleep | 4 | | NULL |
| 112306 | root | 35.204.173.246:59210 | correction_dict | Query | 0 | starting | SHOW PROCESSLIST |
| 112357 | root | localhost | NULL | Sleep | 4 | | NULL |
| 112368 | root | localhost | NULL | Sleep | 0 | | NULL |
+--------+------+----------------------+-----------------+---------+------+----------+------------------+
I've authorized my IP to connect to the Cloud SQL instance.
Any hints or help?
Google App Engine Standard provides a unix socket at /cloudsql/[INSTANCE_CONNECTION_NAME] that automatically connects you to your Cloud SQL instance. All you need to do is connect to it at that address. For the MySQLdb library, that looks like this:
import os
import MySQLdb

# env_variables from app.yaml are exposed via os.environ
cloudsql_unix_socket = '/cloudsql/' + os.environ['CLOUDSQL_CONNECTION_NAME']
db = MySQLdb.connect(
    unix_socket=cloudsql_unix_socket,
    user=os.environ['CLOUDSQL_USER'],
    passwd=os.environ['CLOUDSQL_PASSWORD'])
(If you are running AppEngine Flexible, connecting is different and can be found here)

Dokku app not deploying. Can anyone help me make sense of the logs?

I've been struggling with deploying an app on Dokku since yesterday. I've been able to deploy two others on the same platform, but for some reason this one is giving issues.
Right now, I can't even make sense of these logs.
11:30:52 rake.1 | started with pid 12
11:30:52 console.1 | started with pid 14
11:30:52 web.1 | started with pid 16
11:30:52 worker.1 | started with pid 18
11:31:30 worker.1 | [Worker(host:134474ed9b8c pid:18)] Starting job worker
11:31:30 worker.1 | 2015-09-21T11:31:30+0000:[Worker(host:134474ed9b8c pid:18)] Starting job worker
11:31:31 worker.1 | Delayed::Backend::ActiveRecord::Job Load (9.8ms) UPDATE "delayed_jobs" SET locked_at = '2015-09-21 11:31:31.090080', locked_by = 'host:134474ed9b8c pid:18' WHERE id IN (SELECT "delayed_jobs"."id" FROM "delayed_jobs" WHERE ((run_at <= '2015-09-21 11:31:30.694648' AND (locked_at IS NULL OR locked_at < '2015-09-21 07:31:30.694715') OR locked_by = 'host:134474ed9b8c pid:18') AND failed_at IS NULL) ORDER BY priority ASC, run_at ASC LIMIT 1 FOR UPDATE) RETURNING *
11:31:32 console.1 | Loading production environment (Rails 4.2.0)
11:31:33 web.1 | [2015-09-21 11:31:33] INFO WEBrick 1.3.1
11:31:33 web.1 | [2015-09-21 11:31:33] INFO ruby 2.0.0 (2015-04-13) [x86_64-linux]
11:31:33 web.1 | [2015-09-21 11:31:33] INFO WEBrick::HTTPServer#start: pid=20 port=5200
11:31:33 rake.1 | Abort testing: Your Rails environment is running in production mode!
11:31:33 console.1 | Switch to inspect mode.
11:31:33 console.1 |
11:31:33 console.1 | exited with code 0
11:31:33 system | sending SIGTERM to all processes
11:31:33 worker.1 | [Worker(host:134474ed9b8c pid:18)] Exiting...
11:31:33 worker.1 | 2015-09-21T11:31:33+0000: [Worker(host:134474ed9b8c pid:18)] Exiting...
11:31:33 rake.1 | exited with code 1
11:31:33 web.1 | terminated by SIGTERM
11:31:36 worker.1 | SQL (1.6ms) UPDATE "delayed_jobs" SET "locked_by" = NULL, "locked_at" = NULL WHERE "delayed_jobs"."locked_by" = $1 [["locked_by", "host:134474ed9b8c pid:18"]]
11:31:36 worker.1 | exited with code 0
I would really appreciate it if anyone could help me catch what I'm doing wrong. Thanks.

NoHttpResponseException on uploading file to S3 (camel-aws)

I am trying to upload a roughly 10 GB file from my local machine to S3 (inside a camel route). The file gets uploaded in around 3-4 minutes, but it also throws the following exception:
2014-06-26 13:53:33,417 | INFO | ads.com/outbound | FetchRoute | 167 - com.ut.ias - 2.0.3 | Download complete to local. Pushing file to S3
2014-06-26 13:54:19,465 | INFO | manager-worker-6 | AmazonHttpClient | 144 - org.apache.servicemix.bundles.aws-java-sdk - 1.5.1.1 | Unable to execute HTTP request: The target server failed to respond
org.apache.http.NoHttpResponseException: The target server failed to respond
at org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:95)[142:org.apache.httpcomponents.httpclient:4.2.5]
at org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:62)[142:org.apache.httpcomponents.httpclient:4.2.5]
at org.apache.http.impl.io.AbstractMessageParser.parse(AbstractMessageParser.java:254)[141:org.apache.httpcomponents.httpcore:4.2.4]
at org.apache.http.impl.AbstractHttpClientConnection.receiveResponseHeader(AbstractHttpClientConnection.java:289)[141:org.apache.httpcomponents.httpcore:4.2.4]
at org.apache.http.impl.conn.DefaultClientConnection.receiveResponseHeader(DefaultClientConnection.java:252)[142:org.apache.httpcomponents.httpclient:4.2.5]
at org.apache.http.impl.conn.ManagedClientConnectionImpl.receiveResponseHeader(ManagedClientConnectionImpl.java:191)[142:org.apache.httpcomponents.httpclient:4.2.5]
at org.apache.http.protocol.HttpRequestExecutor.doReceiveResponse(HttpRequestExecutor.java:300)[141:org.apache.httpcomponents.httpcore:4.2.4]
.......
at java.util.concurrent.FutureTask.run(FutureTask.java:262)[:1.7.0_55]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)[:1.7.0_55]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)[:1.7.0_55]
at java.lang.Thread.run(Thread.java:744)[:1.7.0_55]
2014-06-26 13:55:08,991 | INFO | ads.com/outbound | FetchRoute | 167 - com.ut.ias - 2.0.3 | Upload complete.
As a result, the camel route doesn't stop and continuously throws InterruptedException:
2014-06-26 13:55:11,182 | INFO | ads.com/outbound | SftpOperations | 110 - org.apache.camel.camel-ftp - 2.12.1 | JSCH -> Disconnecting from cxportal.integralads.com port 22
2014-06-26 13:55:11,183 | INFO | lads.com session | SftpOperations | 110 - org.apache.camel.camel-ftp - 2.12.1 | JSCH -> Caught an exception, leaving main loop due to Socket closed
2014-06-26 13:55:11,183 | WARN | lads.com session | eventadmin | 139 - org.apache.felix.eventadmin - 1.3.2 | EventAdmin: Exception: java.lang.InterruptedException
java.lang.InterruptedException
at EDU.oswego.cs.dl.util.concurrent.LinkedQueue.offer(Unknown Source)[139:org.apache.felix.eventadmin:1.3.2]
at EDU.oswego.cs.dl.util.concurrent.PooledExecutor.execute(Unknown Source)[139:org.apache.felix.eventadmin:1.3.2]
at org.apache.felix.eventadmin.impl.tasks.DefaultThreadPool.executeTask(DefaultThreadPool.java:101)[139:org.apache.felix.eventadmin:1.3.2]
at org.apache.felix.eventadmin.impl.tasks.AsyncDeliverTasks.execute(AsyncDeliverTasks.java:105)[139:org.apache.felix.eventadmin:1.3.2]
at org.apache.felix.eventadmin.impl.handler.EventAdminImpl.postEvent(EventAdminImpl.java:100)[139:org.apache.felix.eventadmin:1.3.2]
at org.apache.felix.eventadmin.impl.adapter.LogEventAdapter$1.logged(LogEventAdapter.java:281)[139:org.apache.felix.eventadmin:1.3.2]
at org.ops4j.pax.logging.service.internal.LogReaderServiceImpl.fire(LogReaderServiceImpl.java:134)[50:org.ops4j.pax.logging.pax-logging-service:1.7.1]
at org.ops4j.pax.logging.service.internal.LogReaderServiceImpl.fireEvent(LogReaderServiceImpl.java:126)[50:org.ops4j.pax.logging.pax-logging-service:1.7.1]
at org.ops4j.pax.logging.service.internal.PaxLoggingServiceImpl.handleEvents(PaxLoggingServiceImpl.java:180)[50:org.ops4j.pax.logging.pax-logging-service:1.7.1]
at org.ops4j.pax.logging.service.internal.PaxLoggerImpl.inform(PaxLoggerImpl.java:145)[50:org.ops4j.pax.logging.pax-logging-service:1.7.1]
at org.ops4j.pax.logging.internal.TrackingLogger.inform(TrackingLogger.java:86)[18:org.ops4j.pax.logging.pax-logging-api:1.7.1]
at org.ops4j.pax.logging.slf4j.Slf4jLogger.info(Slf4jLogger.java:476)[18:org.ops4j.pax.logging.pax-logging-api:1.7.1]
at org.apache.camel.component.file.remote.SftpOperations$JSchLogger.log(SftpOperations.java:359)[110:org.apache.camel.camel-ftp:2.12.1]
at com.jcraft.jsch.Session.run(Session.java:1621)[109:org.apache.servicemix.bundles.jsch:0.1.49.1]
at java.lang.Thread.run(Thread.java:744)[:1.7.0_55]
Please see my code below and let me know where I am going wrong:
TransferManager tm = new TransferManager(S3Client.getS3Client());
// TransferManager processes all transfers asynchronously,
// so this call will return immediately.
Upload upload = tm.upload(Utils.getProperty(Constants.BUCKET),
        getS3Key(file.getName()), file);
try {
    upload.waitForCompletion();
    logger.info("Upload complete.");
} catch (AmazonClientException amazonClientException) {
    logger.warn("Unable to upload file, upload was aborted.");
    amazonClientException.printStackTrace();
}
The stack trace doesn't even have any reference to my code, so I couldn't determine where the issue is.
Any help or pointers would be really appreciated.
Thanks

What are the potential status codes for AWS Auto Scaling Activities?

Here is the documentation for the Activity data type.
However, I think I've seen 4 status codes for the responses:
'Successful'
'Cancelled'
'InProgress'
'PreInProgress'
Are there any others?
It looks like they have updated the documentation at the same URL you shared:
Valid Values: WaitingForSpotInstanceRequestId | WaitingForSpotInstanceId | WaitingForInstanceId | PreInService | InProgress | Successful | Failed | Cancelled
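To see these codes in practice: each activity returned by DescribeScalingActivities carries a StatusCode field with one of the values above. A sketch with the AWS CLI, using a hypothetical Auto Scaling group name my-asg (assumes the CLI is installed and configured):

```shell
# Sketch: print the StatusCode of recent scaling activities for a
# hypothetical group "my-asg"; guarded so it degrades gracefully
# when the AWS CLI is absent or unconfigured.
if command -v aws >/dev/null 2>&1; then
  aws autoscaling describe-scaling-activities \
    --auto-scaling-group-name my-asg \
    --query 'Activities[].StatusCode' \
    --output text || echo "call failed (check credentials/region)"
else
  echo "aws CLI not installed"
fi
```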