Could not create listener socket on port 8000 - icecast

This is the error I get when trying to run icecast2 on the server using the terminal (command line): sudo icecast2 -b -c /etc/icecast2/icecast.xml
Error message:
EROR connection/connection_setup_sockets Could not create listener socket on
port 8000
EROR connection/connection_setup_sockets No listening sockets established
Server startup failed. Exiting
I can load the Icecast server in the browser BUT there are no mount points. Please help.

Sounds like Icecast is already running.
Why are you trying to start it manually? You should use systemd or the init script that comes with the Icecast package of your distribution.
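For example, on a systemd-based distribution you could check what is holding the port and manage the packaged service instead (a sketch; the service name may differ on your distribution):
# check whether something (most likely an already running Icecast) holds port 8000
sudo ss -tlnp | grep :8000
# manage the packaged service rather than launching the binary by hand
sudo systemctl status icecast2
sudo systemctl restart icecast2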

Related

Connecting to Tensorboard Logs on a Remote Server: The Connection was Reset

When trying to connect to TensorBoard logs on a remote server by entering the address http://localhost:16006/ in Chrome and Firefox, I get the message "channel 3: open failed: connect failed: Connection refused" multiple times in the command line, and "The Connection was Reset" in the browser.
I ssh into the server like this: ssh -L 16006:127.0.0.1:6006 username@machine, then go one level up from the log folder and run: tensorboard --logdir logs --port 16006
I tried:
tensorboard --logdir logs --port 16006 --bind-all
and also
tensorboard --logdir logs --host localhost, also
tensorboard --logdir logs --host 127.0.0.1
None of the above has worked. I tried running the lines above from another environment, which didn't help. I went to the office and tried connecting to the logs from the server machine directly, and it worked.
It used to work before when accessing remotely. Do you know what the problem is? Any hint would be immensely appreciated.
I am having the same problem, but I think you should pass --port 6006, since it looks like you are forwarding remote port 6006 to local port 16006.
Even so, since 6006 is the default port, the other commands should have worked, but you would have to go to http://127.0.0.1:16006 on your local machine, rather than the link it provides.
Some more in-depth explanations can be found here: how to run tensorboard on a remote server and how to see tensorboard over ssh.
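As a sketch of how the ports have to line up with the tunnel (assuming the host and log directory from the question):
# on the local machine: forward local port 16006 to port 6006 on the remote host
ssh -L 16006:127.0.0.1:6006 username@machine
# on the remote machine: let TensorBoard listen on its default port 6006
tensorboard --logdir logs --port 6006
# then browse to http://127.0.0.1:16006 on the local machine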
Even following this advice, though, I am still getting a 'channel 3: open failed: connect failed: Connection failed' error.

How to run daphne in localhost with https and mkcert

I am trying to run a django-channels project locally using https (the app has a Facebook login that requires https).
I have followed the instructions for generating a key and certificate using mkcert ( https://github.com/FiloSottile/mkcert ) and have attempted to use them by running daphne -e ssl:443:privateKey=localhost+1-key.pem:certKey=localhost+1.pem django_project.asgi:application -p 8000 -b 0.0.0.0
The server seems to start OK; however, when I try to visit https://0.0.0.0:8000 nothing happens, and eventually I get a 'took too long to respond' message.
No new output is added to the standard daphne output that appears when I start up the server:
2019-07-16 19:23:27,818 INFO HTTP/2 support enabled
2019-07-16 19:23:27,818 INFO Configuring endpoint ssl:8443:privateKey=../sec/localhost+1-key.pem:certKey=../sec/localhost+1.pem
2019-07-16 19:23:27,823 INFO Listening on TCP address 0.0.0.0:8443
2019-07-16 19:23:27,823 INFO Configuring endpoint tcp:port=8000:interface=0.0.0.0
2019-07-16 19:23:27,824 INFO Listening on TCP address 0.0.0.0:8000
Can anyone help with this?
You should map host port 8000 to port 443 of the container when running the server.
docker run ... -p 8000:443 ...
It turns out that setting up the Twisted ssl endpoint overrides the port you set with -p in daphne, so in the example above the site would be served on port 443.
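A minimal sketch of the takeaway, assuming the mkcert files from the question sit in the working directory: put the port you actually want into the ssl endpoint, and browse to the hostname the certificate was issued for (localhost) rather than 0.0.0.0.
# serve HTTPS directly on port 8000 instead of passing a separate -p/-b pair
daphne -e ssl:8000:privateKey=localhost+1-key.pem:certKey=localhost+1.pem django_project.asgi:application
# then visit https://localhost:8000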

gRPC C++, client: "14: Connect Failed"

We are running the "helloworld" example from https://grpc.io/docs/quickstart/cpp.html#update-a-grpc-service and we receive the following error:
14: Connect Failed
Greeter received: RPC failed.
The server and the client both use 0.0.0.0:50051. The server is running.
First we receive just one packet on the server and the client crashes; I checked it with tcpdump. We checked on different hosts as well as on the same host, but it didn't work in either case.
Should we use a different IP or a different port number?
I got the same issue on my PC (OS: Ubuntu 16.04 LTS, protobuf 3.4.0),
so I searched for the reason and found this:
Reason
If, on a Linux machine, the environment has the usual "http_proxy" environment variable configured, gRPC will take that into account when trying to connect; however, it will then proceed to ignore the companion no_proxy setting:
For example:
$ env
http_proxy=http://106.1.216.121:8080
no_proxy=localhost,127.0.0.1
$ ./greeter_client
D0306 16:00:11.419586349 1897 combiner.c:351] C:0x25a9290 finish old_state=3
D0306 16:00:11.420527744 1896 tcp_client_posix.c:179] CLIENT_CONNECT: ipv4:106.1.216.121:8080: on_writable: error="No Error"
D0306 16:00:11.420567382 1896 combiner.c:145] C:0x25a69a0 create
D0306 16:00:11.420581887 1896 tcp_client_posix.c:119] CLIENT_CONNECT: ipv4:106.1.216.121:8080: on_alarm: error="Cancelled"
I0306 16:00:11.420617663 1896 http_connect_handshaker.c:319] Connecting to server 127.0.0.1:50051 via HTTP proxy ipv4:106.1.216.121:8080
Basically, it's using the http_proxy URL to connect even though localhost is in the no_proxy list. Since the default for no_proxy includes localhost on most Linux machines, the end result is that any user with an http_proxy configured will never be able to connect to localhost. --- [1]
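If that is what is happening for you, a likely workaround (assuming the proxy is not needed to reach your gRPC server) is to clear the proxy variables in the shell that launches the binaries:
# clear the proxy settings for this shell only, then run the client against localhost
unset http_proxy https_proxy
./greeter_client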
Other solution
You can enable gRPC tracing with
export GRPC_TRACE=all && ./greeter_server, and the same for the client.
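As a sketch (depending on the build, you may also need to raise GRPC_VERBOSITY to actually see the trace output):
# enable verbose tracing for a single run of the client
GRPC_VERBOSITY=DEBUG GRPC_TRACE=api,http ./greeter_client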
Verification
Run the server in Terminal 1 and the client in Terminal 2. That should do the trick.
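A minimal sketch of that check, assuming the quickstart binaries are in the current directory and the proxy variables have been cleared as above:
# Terminal 1: start the server
./greeter_server
# Terminal 2: run the client; on success it prints the greeting instead of "RPC failed"
./greeter_client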
P.S. For more information about GRPC_TRACE, see gRPC environment variables.
Reference
gRPC doesn't respect the no_proxy environment variable

haproxy in docker container

I'm new to Docker and HAProxy. I tried to follow the example from the official Docker Hub repo.
So, I have this Dockerfile:
FROM haproxy:1.5
COPY haproxy.cfg /usr/local/etc/haproxy/haproxy.cfg
and a simple haproxy config (which I expect to forward local calls to my EB instance):
global
    # daemon
    maxconn 256
defaults
    mode http
    timeout connect 5000ms
    timeout client 50000ms
    timeout server 50000ms
frontend http-in
    bind *:80
    default_backend servers
backend servers
    server server1 {my-app}.elasticbeanstalk.com:80 maxconn 32
Build and run
$ docker build .
$ docker run --rm d4598bcc293f
The container starts and gets stuck; Ctrl+C doesn't stop it. Only "docker kill" helps.
My EB resource is up and running
$ curl {my-app}.elasticbeanstalk.com/status
{
"status": "OK"
}
But local calls fail
$ boot2docker ip
192.168.59.104
$ curl 192.168.59.104/status
curl: (7) Failed to connect to 192.168.59.104 port 80: Connection refused
What am I missing or doing wrong?
Thank you!
UPDATE: I've found the problem with the call forwarding: a wrong port number in haproxy.cfg.
But this problem still annoys me... the container starts and gets stuck; Ctrl+C doesn't stop it. Only "docker kill" helps.
If you want to be able to exit with Ctrl+C, do docker run -i <image>. The -i flag passes input to the containerized program, and if HAProxy gets a Ctrl+C it will terminate, which will stop the container.
HAProxy doesn't produce any output unless you run it in debug mode, so there's not really much point to running attached, though. You might have a better time with docker run -d <image>, which will detach from the container and let it run in the background. To stop it, use docker kill.
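A minimal sketch of the detached workflow (the image tag and the -p 80:80 publish are assumptions; adjust them to your setup):
# build with a tag so the image is easier to refer to
docker build -t my-haproxy .
# run detached and publish the frontend port so curl from the host can reach it
docker run -d --name haproxy -p 80:80 my-haproxy
# stop it when you're done
docker kill haproxy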

Installing and Viewing Neo4j on Existing AWS EC2 Instance

I'm trying to install the enterprise edition of Neo4j on an existing EC2 (Amazon Linux) instance. So far I've:
run wget "link to enterprise"
untarred the file
renamed and moved the folder to NEO4J_HOME
Then I went into the neo4j.properties config file to make the following changes:
# Enable shell server so that remote clients can connect via Neo4j shell.
remote_shell_enabled=true
# The network interface IP the shell will listen on (use 0.0.0.0 for all interfaces)
remote_shell_host=127.0.0.1
# The port the shell will listen on, default is 1337
remote_shell_port=1337
EDITED: Christophe Willemsen pointed out that for my original error I had forgotten to restart the server at that point, but I was still unable to access the web server while it was running. So to make it clearer, I've edited the rest of the post:
I went to neo4j-server.properties and uncommented:
org.neo4j.server.webserver.address=0.0.0.0
And started the server:
NEO4J_HOME/bin/neo4j start
WARNING: Max 1024 open files allowed, minimum of 40 000 recommended. See the Neo4j manual.
Using additional JVM arguments: -server -XX:+DisableExplicitGC -Dorg.neo4j.server.properties=conf/neo4j-server.properties -Djava.util.logging.config.file=conf/logging.properties -Dlog4j.configuration=file:conf/log4j.properties -XX:+UseConcMarkSweepGC -XX:+CMSClassUnloadingEnabled -XX:-OmitStackTraceInFastThrow
Starting Neo4j Server...WARNING: not changing user
process [28557]... waiting for server to be ready..... OK.
http://localhost:7474/ is ready.
Checking the status:
NEO4J_HOME/bin/neo4j status
Neo4j Server is running at pid 28557
I can run the shell, but when I go to localhost:7474 I still cannot connect.
Any help would be appreciated. The only tutorial or help I've found assumed I was starting from scratch with a new instance. If someone could provide some instructions for installing or fix my configuration, that would be great.
Thanks!
You have to edit neo4j-server.properties and uncomment the line with:
org.neo4j.server.webserver.address=0.0.0.0
That way the db listens on an external interface, not just localhost. You also have to open port 7474 in your firewall rules.
Make sure to secure access to the db though:
http://neo4j.com/docs/stable/security-server.html
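After editing the config, restart Neo4j so the change takes effect, and open the port in the instance's security group. A sketch with the AWS CLI (the security group id and CIDR are placeholders; restrict the CIDR to your own network rather than opening the port to the world):
# restart so the edited neo4j-server.properties is picked up
NEO4J_HOME/bin/neo4j restart
# allow inbound TCP 7474 on the instance's security group (sg-xxxxxxxx is a placeholder)
aws ec2 authorize-security-group-ingress --group-id sg-xxxxxxxx --protocol tcp --port 7474 --cidr 203.0.113.0/24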