Problems getting RabbitMQ and Django-Celery Running: Target Machine actively refused connection - django

I am trying to get Django-Celery running on my Django App. I cannot get the worker server to run. When I try I get the message: No Connection could be made because the target machine actively refused it
Here is what I have done so far. First, I installed the django celery package: http://pypi.python.org/pypi/django-celery
I can load it into python without problems. I also installed the RabbitMQ server per the windows install instructions: http://www.rabbitmq.com/install.html#windows
Starting the Python tutorials on the RabbitMQ site, I saw the need to install pika: http://pypi.python.org/pypi/pika. It imports without any problems.
From there I start the RabbitMQ server by running this at the command line: rabbitmq-service start
I get the message back that Service RabbitMQ started
Here is where I start to have problems.
I attempted the first steps in django-celery: http://packages.python.org/django-celery/getting-started/first-steps-with-django.html and the "hello world" example on the RabbitMQ site: http://www.rabbitmq.com/tutorials/tutorial-one-python.html
In both cases I get the message: No Connection could be made because the target machine actively refused it
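For reference, the connection step that both tutorials perform boils down to something like the sketch below (assuming the broker is on localhost:5672 with the default guest account); the "actively refused" error surfaces as soon as pika tries to open the socket:

import pika

# Attempt the same connection the RabbitMQ "hello world" tutorial makes.
# Assumes the broker is on localhost:5672 with the default guest account.
params = pika.ConnectionParameters(host='localhost', port=5672)
try:
    connection = pika.BlockingConnection(params)
    print("Connected to RabbitMQ")
    connection.close()
except pika.exceptions.AMQPConnectionError as exc:
    # "No connection could be made because the target machine actively refused it"
    # typically ends up wrapped in this exception, meaning nothing accepted the socket.
    print("Broker unreachable:", exc)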
My first thought was that this sounded like a firewall problem. So I went into the Windows 7 firewall and added inbound and outbound rules to open local and remote ports 5672 and 5673 for TCP, but I still get the same error message.
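As a quick sanity check that is independent of the firewall rules, a plain socket probe (a hedged sketch, not from the original post) shows whether anything is listening on those ports at all; a refusal on 127.0.0.1 usually means no process is bound to the port, so the firewall is not the culprit:

import socket

# Probe the AMQP ports on localhost; connect_ex returns 0 only if something accepts.
for port in (5672, 5673):
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.settimeout(2)
    result = sock.connect_ex(('127.0.0.1', port))
    print(port, 'open' if result == 0 else 'closed/refused (errno %d)' % result)
    sock.close()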
When I run rabbitmqctl status I get the message:
Error: unable to connect to node 'rabbit@hostname': nodedown
diagnostics:
- nodes and their ports on hostname: [{rabbitmqctl18856, 505031}]
Does that mean that it is trying to operate on those ports? What about the default 5672?
Any suggestions?
UPDATE: This was actually a problem resulting from several failed RabbitMQ installs conflicting with the latest installation. If you have to remove RabbitMQ, use the 'rabbitmq-service remove' command and not SC DELETE, which caused a lot of problems for me and forced me to go in and clean up my Windows registry.

The nodedown error indicated by rabbitmqctl suggests that the server isn't running on that machine.
Try going through the steps in RabbitMQ's troubleshooting guide. In particular, pay close attention to the logs. Has the server crashed for some reason? Could you post the logs somewhere?

Related

AWS Ubuntu SSH suddenly disconnects and can't reconnect

I created a new micro instance of Ubuntu Server 20.04 and connected with PuTTY successfully. I did some work on the server, setting up nginx and other things. Out of the blue, my connection got dropped, and when I try to reconnect using PuTTY I get a connection timeout. I spent a while trying to figure out what was wrong and gave up, so I created a new instance and did some work, and after about an hour the same thing happened: disconnected, and I can't reconnect because of a timeout error. What could cause this? How do I even troubleshoot?
The instructions for configuring NGINX on Ubuntu include steps for configuring UFW and say to allow HTTP and HTTPS, but they make no mention of SSH. So I allowed SSH with "ufw allow ssh", then set ENABLED=yes in /etc/ufw/ufw.conf, and so far so good.

Cloud Foundry cli i/o timeout

I was able to successfully deploy BOSH and CF on GCP. I was able to install the cf cli on my worker machine and was able to cf login to the api endpoint without any issues. Now I am attempting to deploy a python and a node.js hello-world style application (cf push) but I am running into the following error:
Python:
**ERROR** Could not install python: Get https://buildpacks.cloudfoundry.org/dependencies/python/python-3.5.4-linux-x64-5c7aa3b0.tgz: dial tcp: lookup buildpacks.cloudfoundry.org on 169.254.0.2:53: read udp 10.255.61.196:36513->169.254.0.2:53: i/o timeout
Failed to compile droplet: Failed to run all supply scripts: exit status 14
NodeJS:
-----> Nodejs Buildpack version 1.6.28
-----> Installing binaries
engines.node (package.json): unspecified
engines.npm (package.json): unspecified (use default)
**WARNING** Node version not specified in package.json. See: http://docs.cloudfoundry.org/buildpacks/node/node-tips.html
-----> Installing node 6.14.3
Download [https://buildpacks.cloudfoundry.org/dependencies/node/node-6.14.3-linux-x64-ae2a82a5.tgz]
**ERROR** Unable to install node: Get https://buildpacks.cloudfoundry.org/dependencies/node/node-6.14.3-linux-x64-ae2a82a5.tgz: dial tcp: lookup buildpacks.cloudfoundry.org on 169.254.0.2:53: read udp 10.255.61.206:34802->169.254.0.2:53: i/o timeout
Failed to compile droplet: Failed to run all supply scripts: exit status 14
I am able to download and ping the buildpack URLs manually on the worker machine, jumpbox, and the BOSH VMs, so I believe DNS is working properly on each of those machine types.
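One way to check whether that specific resolver (169.254.0.2) answers from a given machine is to query it directly; the sketch below uses dnspython, which is purely my assumption for illustration and not part of the CF tooling:

import dns.exception
import dns.resolver  # pip install dnspython (assumption, used only for this check)

# Query the same nameserver the staging container is pointed at.
resolver = dns.resolver.Resolver(configure=False)
resolver.nameservers = ['169.254.0.2']
resolver.lifetime = 5  # give up after 5 seconds, mirroring the i/o timeout

try:
    answer = resolver.resolve('buildpacks.cloudfoundry.org', 'A')
    print([rr.to_text() for rr in answer])
except dns.exception.Timeout:
    print('DNS query to 169.254.0.2 timed out - same symptom as the staging error')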
As part of the default deployment, I believe a SOCKS5 tunnel is created to allow communication from my worker machine to the jumpbox, so this is where I believe the issue lies. https://docs.cloudfoundry.org/cf-cli/http-proxy.html
Running bbl print-env gives export BOSH_ALL_PROXY=ssh+socks5://jumpbox@35.192.140.0:22?private-key=/tmp/bosh-jumpbox725514160/bosh_jumpbox_private.key. However, when I export https_proxy=socks5://jumpbox@35.192.140.0:22?private-key=/tmp/bosh-jumpbox389236516/bosh_jumpbox_private.key and do a cf push, I receive the following error:
Request error: Get https://api.cloudfoundry.costub.com/v2/info: proxy: SOCKS5 proxy at 35.192.140.0:22 has unexpected version 83
TIP: If you are behind a firewall and require an HTTP proxy, verify the https_proxy environment variable is correctly set. Else, check your network connection.
FAILED
Am I on the right track? Is my https_proxy variable formatted correctly? I also tried https_proxy=socks5://jumpbox@35.192.140.0:22 with the same result.
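One possibly useful observation: 83 is the ASCII code for 'S', which is exactly what a SOCKS client would read first if the endpoint answers with an SSH banner ("SSH-2.0-...") rather than a SOCKS5 handshake, i.e. https_proxy may be pointing at a plain SSH port rather than a running SOCKS5 listener. A quick probe (a hedged sketch using the IP from the error above) makes that visible:

import socket

# Connect to the supposed SOCKS5 endpoint and send a SOCKS5 greeting.
# If the first reply byte is 83 ('S'), the far end is speaking SSH, not SOCKS5.
sock = socket.create_connection(('35.192.140.0', 22), timeout=5)
sock.sendall(b'\x05\x01\x00')  # SOCKS5 greeting: version 5, one auth method (none)
first_byte = sock.recv(1)
print('first reply byte:', first_byte[0], repr(first_byte))
sock.close()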

504 gateway timeout flask socketio

I am working on a flask-socketio server which is getting stuck in a state where only 504s (gateway timeout) are returned. We are using AWS ELB in front of the server. I was wondering if anyone wouldn't mind giving some tips as to how to debug this issue.
Other symptoms:
This problem does not occur consistently, but once it begins happening, only 504s are received from requests. Restarting the process seems to fix the issue.
When I run netstat -nt on the server, I see many entries with Recv-Q values over 100 stuck in the CLOSE_WAIT state (a way to keep an eye on this is sketched after this list)
When I run strace on the process, I only see select and clock_gettime
When I run tcpdump on the server, I can see the valid requests coming into the server
AWS health checks are coming back successfully
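To watch the buildup without rerunning netstat by hand, a small monitoring sketch using psutil (my assumption; it is not part of the app) can count the stuck sockets periodically:

import time
import psutil  # pip install psutil (assumption, used only for this monitoring sketch)

# Periodically count sockets stuck in CLOSE_WAIT, the symptom seen in netstat -nt.
while True:
    stuck = [c for c in psutil.net_connections(kind='tcp') if c.status == psutil.CONN_CLOSE_WAIT]
    print(time.strftime('%H:%M:%S'), 'CLOSE_WAIT sockets:', len(stuck))
    time.sleep(30)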
EDIT:
I should also add two things:
flask-socketio's built-in server is used in production (not gunicorn or uWSGI)
Python's daemonize function is used for daemonizing the app
It seems that switching to gunicorn as the WSGI server fixed the problem. This might legitimately be an issue with flask-socketio's built-in WSGI server.
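For anyone following along, a minimal layout that can be served by gunicorn with the eventlet worker looks roughly like this (module and handler names are placeholders, not taken from the original post):

# app.py - hypothetical minimal layout
import eventlet
eventlet.monkey_patch()

from flask import Flask
from flask_socketio import SocketIO

app = Flask(__name__)
socketio = SocketIO(app, async_mode='eventlet')

@socketio.on('message')
def handle_message(msg):
    # Echo handler, just to have something exercising the socket path.
    socketio.send(msg)

if __name__ == '__main__':
    # Development only; in production run, for example:
    #   gunicorn --worker-class eventlet -w 1 app:app
    socketio.run(app, host='0.0.0.0', port=5000)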

Installing and Viewing Neo4j on Existing AWS EC2 Instance

I'm trying to install the enterprise edition of neo4j on an existing EC2 (Amazon linux) instance. So far I've
wget "link to enterprise"
untarred the file
renamed and moved the folder to NEO4J_HOME
then went into the neo4j.properties config file to make the following changes:
# Enable shell server so that remote clients can connect via Neo4j shell.
remote_shell_enabled=true
# The network interface IP the shell will listen on (use 0.0.0.0 for all interfaces)
remote_shell_host=127.0.0.1
# The port the shell will listen on, default is 1337
remote_shell_port=1337
EDIT: Christophe Willemsen pointed out that my original error came from forgetting to restart the server at that point, but I was still unable to access the web server while it was running. To make it clearer, I've edited the rest of the post:
I went to neo4j-server.properties and uncommented:
org.neo4j.server.webserver.address=0.0.0.0
And started the server:
NEO4J_HOME/bin/neo4j start
WARNING: Max 1024 open files allowed, minimum of 40 000 recommended. See the Neo4j manual.
Using additional JVM arguments: -server -XX:+DisableExplicitGC -Dorg.neo4j.server.properties=conf/neo4j-server.properties -Djava.util.logging.config.file=conf/logging.properties -Dlog4j.configuration=file:conf/log4j.properties -XX:+UseConcMarkSweepGC -XX:+CMSClassUnloadingEnabled -XX:-OmitStackTraceInFastThrow
Starting Neo4j Server...WARNING: not changing user
process [28557]... waiting for server to be ready..... OK.
http://localhost:7474/ is ready.
checking the status:
NEO4J_HOME/bin/neo4j status
Neo4j Server is running at pid 28557
I can run the shell, but when I go to localhost:7474 I still cannot connect.
Any help would be appreciated. The only tutorials or help I've found assumed I was starting from scratch with a new instance. If someone could provide instructions for installing, or fix my configuration, that would be great.
Thanks!
You have to edit neo4j-server.properties and uncomment the line with:
org.neo4j.server.webserver.address=0.0.0.0
So that the db listens on an external interface, not just localhost. You also have to open port 7474 in your firewall rules.
Make sure to secure access to the db though:
http://neo4j.com/docs/stable/security-server.html
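Once the address is changed and port 7474 is opened in the security group, a quick reachability check from outside AWS (the hostname below is a placeholder; substitute the instance's public DNS or IP) confirms the web interface is exposed:

import urllib.request

# Placeholder address; replace with the EC2 instance's public DNS name or IP.
url = 'http://ec2-public-ip:7474/'
try:
    with urllib.request.urlopen(url, timeout=5) as resp:
        print(resp.status, resp.reason)
except Exception as exc:
    # A timeout here usually means the port is still blocked by the security group.
    print('Not reachable:', exc)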

Ejabberd server not getting started?

I have re-installed the ejabberd server on my localhost. When I run sudo service ejabberd restart it does not restart; instead it creates an error. The following error is shown in erl_crash.dump. All my configuration in the conf file is correct.
Kernel pid terminated (application_controller) ({application_start_failure,kernel,{{shutdown,{failed_to_start_child,net_sup,{shutdown,{failed_to_start_child,net_kernel,{'EXIT',nodistribution}}}}},{k
I tried everything, including killing the processes running on the same ports. Is there anything else I can do to solve this issue?
Solved the issue: the problem was with my settings. I then restarted the server using sudo service ejabberd restart and it worked.