Error in Capistrano deployment on localhost - ruby-on-rails-4

I am deploying my application to localhost using Capistrano, but I am getting the error below:
INFO [5f197b14] Running /usr/bin/env mkdir -p /tmp/promo_app/ as chakreshwar@localhost
DEBUG [5f197b14] Command: /usr/bin/env mkdir -p /tmp/promo_app/
(Backtrace restricted to imported tasks)
cap aborted!
Errno::ECONNREFUSED: Connection refused - connect(2) for 127.0.0.1:22
I am using the following gems for Capistrano:
gem 'capistrano'
gem 'capistrano-ext'
Below is the code of my deploy.rb:
# config valid only for current version of Capistrano
lock '3.4.0'
set :application, 'my_app'
set :repo_url, '/home/test/git_server/test_app.git'
set :deploy_to, '/home/test/projects/capistrano_deployment/my_app'
set :scm, :git
set :format, :pretty
# Default value for :pty is false
set :pty, true
set :default_stage, "staging"
namespace :deploy do
  after :restart, :clear_cache do
    on roles(:web), in: :groups, limit: 3, wait: 10 do
    end
  end
end
Below is my staging.rb:
server 'localhost', user: 'username', roles: %w{app db web}#, other_property: :other_value
role :app, %w{localhost}#, my_property: :my_value
role :web, %w{localhost}#, other_property: :other_value
role :db, %w{localhost}
Please tell me if anything is missing.

The error indicates that you cannot SSH to the destination box, in this case localhost. Try ssh 127.0.0.1, and make sure that works. The deployment should execute once that works.
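If you want to confirm the problem without the SSH client configuration getting in the way, a plain TCP probe will tell you whether anything is listening on 127.0.0.1:22 at all. A minimal sketch in Python (nothing here is Capistrano-specific):

```python
import socket

def tcp_port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to (host, port) succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# If this prints False, no SSH daemon is listening, and Capistrano
# will fail with ECONNREFUSED exactly as in the log above.
print(tcp_port_open("127.0.0.1", 22))
```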
In regards to your general config, a couple of notes:
The capistrano-ext gem is obsolete, you can remove that.
In staging.rb, you have duplicate directives. You should probably remove the lines beginning with role in favor of the line beginning with server.
In staging.rb, make sure that user: is set to the SSH user with which you will log in.
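Concretely, after dropping the role lines, staging.rb could shrink to a single server line (a sketch; 'chakreshwar' is taken from the log output in the question - substitute your actual SSH login):

```ruby
server 'localhost', user: 'chakreshwar', roles: %w{app db web}
```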
Good luck!

Maybe you're missing an SSH server on your own machine because you only have the client installed.
If ssh 127.0.0.1 fails, install the SSH server with:
sudo apt-get install openssh-server

Related

Trouble Connecting to PostgreSQL Running in a Ubuntu VM

I have created an instance of PostgreSQL running in an Ubuntu/Bionic box in Vagrant/VirtualBox that will be used by Django in my dev environment. I wanted to test my ability to connect to it with either the terminal or pgAdmin before connecting with Django, just to be sure it was working on that end first; the idea being that I could make later Django debugging easier if I were assured the connection works; but I've had no success.
I have tried editing the configuration files that many posts suggest, with no effect. I can, however, ping the box via the IP assigned in the Vagrantfile with no issue - but not when specifying port 5432 with ping 10.1.1.1:5432. I can also use psql from within the box, so it's running.
I have made sure to enable ufw on the VM, created a rule to allow port 5432 and ensured that it took using sudo ufw status. I have also confirmed that I'm editing the correct files using the show command within psql.
Here are the relevant configs as they currently are:
Vagrantfile:
Vagrant.configure("2") do |config|
  config.vm.hostname = "hg-site-db"
  config.vm.provider "virtualbox" do |v|
    v.memory = 2048
    v.cpus = 1
  end
  config.vm.box = "ubuntu/bionic64"
  config.vm.network "forwarded_port", host_ip: "127.0.0.1", guest: 5432, host: 5432
  config.vm.network "public_network", ip: "10.1.1.1"
  config.vm.provision "shell", inline: <<-SHELL
    # Update and upgrade the server packages.
    sudo apt-get update
    sudo apt-get -y upgrade
    # Install PostgreSQL
    sudo apt-get install -y postgresql postgresql-contrib
    # Set Ubuntu Language
    sudo locale-gen en_US.UTF-8
  SHELL
end
/etc/postgresql/10/main/postgresql.conf:
listen_addresses = '*'
/etc/postgresql/10/main/pg_hba.conf - I am aware this is insecure, but I was just trying to find out why it was not working, with plans to go back and correct this:
host all all 0.0.0.0/0 trust
As we discussed in comments, you should remove host_ip from your forwarded port definition and just leave the guest and host ports.
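For reference, the forwarded-port directive without host_ip would look like this (a sketch of just that one line from the Vagrantfile above):

```ruby
# Forward guest port 5432 (PostgreSQL) to port 5432 on the host.
# With no host_ip, the forward is not pinned to 127.0.0.1 only.
config.vm.network "forwarded_port", guest: 5432, host: 5432
```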

Travis CI Deploy by SSH Script/Hosts issue

I have a Django site which I would like to deploy to a DigitalOcean server every time a branch is merged to master. I have it mostly working and have followed this tutorial.
.travis.yml
language: python
python:
  - '2.7'
env:
  - DJANGO_VERSION=1.10.3
addons:
  ssh_known_hosts: mywebsite.com
git:
  depth: false
before_install:
  - openssl aes-256-cbc -K *removed encryption details* -in travis_rsa.enc -out travis_rsa -d
  - chmod 600 travis_rsa
install:
  - pip install -r backend/requirements.txt
  - pip install -q Django==$DJANGO_VERSION
before_script:
  - cp backend/local.env backend/.env
script: python manage.py test
deploy:
  skip_cleanup: true
  provider: script
  script: "./travis-deploy.sh"
  on:
    all_branches: true
travis-deploy.sh - runs when the travis 'deploy' task calls it
#!/bin/bash
# print outputs and exit on first failure
set -xe
if [ $TRAVIS_BRANCH == "master" ] ; then
  # setup ssh agent, git config and remote
  echo -e "Host mywebsite.com\n\tStrictHostKeyChecking no\n" >> ~/.ssh/config
  eval "$(ssh-agent -s)"
  ssh-add travis_rsa
  git remote add deploy "travis@mywebsite.com:/home/dean/se_dockets"
  git config user.name "Travis CI"
  git config user.email "travis@mywebsite.com"
  git add .
  git status # debug
  git commit -m "Deploy compressed files"
  git push -f deploy HEAD:master
  echo "Git Push Done"
  ssh -i travis_rsa -o UserKnownHostsFile=/dev/null travis@mywebsite.com 'cd /home/dean/se_dockets/backend; echo hello; ./on_update.sh'
else
  echo "No deploy script for branch '$TRAVIS_BRANCH'"
fi
Everything works fine until things get to the 'deploy' stage. I keep getting error messages like:
###########################################################
# WARNING: POSSIBLE DNS SPOOFING DETECTED! #
###########################################################
The ECDSA host key for mywebsite.com has changed,
and the key for the corresponding IP address *REDACTED FOR STACK OVERFLOW*
is unknown. This could either mean that
DNS SPOOFING is happening or the IP address for the host
and its host key have changed at the same time.
###########################################################
# WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED! #
###########################################################
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that a host key has just been changed.
The fingerprint for the ECDSA key sent by the remote host is
* REDACTED FOR STACK OVERFLOW *
Please contact your system administrator.
Add correct host key in /home/travis/.ssh/known_hosts to get rid of this message.
Offending RSA key in /home/travis/.ssh/known_hosts:11
remove with: ssh-keygen -f "/home/travis/.ssh/known_hosts" -R mywebsite.com
Password authentication is disabled to avoid man-in-the-middle attacks.
Keyboard-interactive authentication is disabled to avoid man-in-the-middle attacks.
Permission denied (publickey,password).
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
Script failed with status 128
INTERESTINGLY - if I re-run this job, the 'git push' command will succeed at pushing to the deploy remote (my server). However, the next step in the deploy script, which is to SSH into the server and run some post-update commands, will fail for the same reason (host fingerprint change or something). Or it will ask for travis@mywebsite.com's password (it has none) and hang on the input prompt.
Additionally, when I debug the Travis CI build and use the SSH URL you're given to SSH into the machine Travis CI runs on, I can SSH into my own server from it. However, it takes multiple tries to get around the errors.
So this seems to be a fluid problem, with stuff persisting from one build into the next on retries, causing different errors/endings.
As you can see in my .yml file and the deploy script I have attempted to disable various host name checks and added the domain to known hosts etc... all to no avail.
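For what it's worth, one approach that sidesteps both the spoofing warning and the StrictHostKeyChecking workarounds is to pin the server's current host key up front in before_install (a sketch; mywebsite.com is the placeholder from above, and this assumes you trust the key you are recording):

```shell
# Record the server's current public host keys once, before git/ssh run,
# so later connections match known_hosts instead of disabling the check.
mkdir -p ~/.ssh
ssh-keyscan -t rsa,ecdsa,ed25519 mywebsite.com >> ~/.ssh/known_hosts
```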
I know I have things 99% set up correctly as things do mostly succeed when I retry the job a few times.
Anyone seen this before?
Cheers,
Dean

Django Celery cannot connect to remote RabbitMQ on EC2

I created a RabbitMQ cluster on two instances on EC2. My Django app uses Celery for async tasks, which in turn uses RabbitMQ as the message queue.
Whenever I start celery with the command:
python manage.py celery worker --loglevel=INFO
OR
python manage.py celeryd --loglevel=INFO
I keep getting following error message related to remote RabbitMQ:
[2015-05-19 08:58:47,307: ERROR/MainProcess] consumer: Cannot connect to amqp://myuser:**@<ip-address>:25672/myvhost/: Socket closed.
Trying again in 2.00 seconds...
I set permissions using:
sudo rabbitmqctl set_permissions -p myvhost myuser ".*" ".*" ".*"
and then restarted rabbitmq-server on both the cluster nodes. However, it didn't help.
In the log file, I see a few entries like the below:
=INFO REPORT==== 19-May-2015::08:14:41 ===
accepting AMQP connection <0.1981.0> (<ip-address>:38471 -> <ip-address>:5672)
=ERROR REPORT==== 19-May-2015::08:14:44 ===
closing AMQP connection <0.1981.0> (<ip-address>:38471 -> <ip-address>:5672):
{handshake_error,opening,0,
{amqp_error,access_refused,
"access to vhost 'myvhost' refused for user 'myuser'",
'connection.open'}}
The file /usr/local/etc/rabbitmq/rabbitmq-env.conf contains an entry for NODE_IP_ADDRESS to bind it only to localhost. Removing the NODE_IP_ADDRESS entry from the config binds the port to all network interfaces.
Source: https://superuser.com/questions/464311/open-port-5672-tcp-for-access-to-rabbitmq-on-mac
Turns out I had not created appropriate configuration files. In my case (Ubuntu 14.04), I had to create below two configuration files:
$ cat /etc/rabbitmq/rabbitmq-env.conf
RABBITMQ_NODE_IP_ADDRESS=<ip_of_ec2_instance>
<ip_of_ec2_instance> has to be the internal IP that EC2 uses, not the public IP that one uses to SSH into the instance. It can be obtained with the ip a command.
$ cat /etc/rabbitmq/rabbitmq.config
[
  {mnesia, [{dump_log_write_threshold, 1000}]},
  {rabbit, [{tcp_listeners, [25672]}]},
  {rabbit, [{loopback_users, []}]}
].
I think the line {rabbit, [{tcp_listeners, [25672]}]}, was one of the most important pieces of configuration that I was missing.
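One note on that file: it lists {rabbit, ...} twice, and I'm not certain every Erlang/OTP release merges duplicate application entries rather than keeping only one. Combining them into a single tuple avoids the question entirely (a sketch of the equivalent config):

```erlang
[
  {mnesia, [{dump_log_write_threshold, 1000}]},
  {rabbit, [{tcp_listeners, [25672]},
            {loopback_users, []}]}
].
```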
Thanks @dgil for the initial troubleshooting help.
The question has been answered, but I'm just leaving notes on a similar issue I faced, should anybody else find them useful.
I have a Flask app running on EC2 with AMQP as the broker on port 5672 and EC2 ElastiCache memcached as the backend. The AMQP broker had trouble picking up tasks that were getting fired, so I resolved it as follows.
Assuming you have rabbitmq-server installed (sudo apt-get install rabbitmq-server), add the user and set the permissions as such:
sudo rabbitmqctl add_user username password
sudo rabbitmqctl set_permissions username ".*" ".*" ".*"
restart server: sudo service rabbitmq-server restart
In your flask app for the celery configuration
broker_url='amqp://username:password@localhost:5672//' (set as above)
backend='cache+memcached://(ec2 cache url):11211/'
(The cache+memcached:// prefix tripped me up - without it I kept getting an import error (cannot import module).)
Open up the port 5672 on your ec2 instance in the security group.
Now if you fire up your Celery worker, it should pick up the tasks that get fired and store the results on your memcached server.
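A related gotcha with these broker URLs: credentials or vhosts containing special characters (@, /, :) must be percent-encoded, or the URL parses wrong. A small stdlib-only helper to build the URL safely (a sketch; amqp_url is a hypothetical name, not a Celery API):

```python
from urllib.parse import quote

def amqp_url(user, password, host, port=5672, vhost='/'):
    """Build an amqp:// broker URL, percent-encoding user, password and vhost."""
    return "amqp://{}:{}@{}:{}/{}".format(
        quote(user, safe=''), quote(password, safe=''),
        host, port, quote(vhost, safe=''))

# The default vhost '/' becomes %2F in the URL, as AMQP URL rules require.
print(amqp_url('username', 'p@ss/word', 'localhost'))
```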

Capistrano git:check failed exit status 128

I got this when I ran cap production git:check. (I took out my real ip address and user name)
DEBUG [4d1face0] Running /usr/bin/env git ls-remote -h foo@114.215.183.110:~/git/deepot.git on 114.***.***.***
DEBUG [4d1face0] Command: ( GIT_ASKPASS=/bin/echo GIT_SSH=/tmp/deepot/git-ssh.sh /usr/bin/env git ls-remote -h foo@114.***.***.***:~/git/deepot.git )
DEBUG [4d1face0] Error reading response length from authentication socket.
DEBUG [4d1face0] Permission denied (publickey,password).
DEBUG [4d1face0] fatal: The remote end hung up unexpectedly
DEBUG [4d1face0] Finished in 0.715 seconds with exit status 128 (failed).
Below is my deploy file...
set :user, 'foo'
set :domain, '114.***.***.***'
set :application, 'deepot'
set :repo_url, 'foo@114.***.***.***:~/git/deepot.git'
set :deploy_to, '/home/#{fetch(:user)}/local/#{fetch(:application)}'
set :linked_files, %w{config/database.yml config/bmap.yml config/cloudinary.yml config/environments/development.rb config/environments/production.rb}
Below is my production.rb...
role :app, %w{foo@114.***.***.***}
role :web, %w{foo@114.***.***.***}
role :db, %w{foo@114.***.***.***}
server '114.***.***.***', user: 'foo', roles: %w{web app}, ssh_options: {keys: %w{/c/Users/Administrator/.ssh/id_rsa}, auth_methods: %w(publickey)}
I can successfully SSH onto foo@114.***.***.*** without entering any password using Git Bash. (I am on a Windows 7 machine and the deployment server is Ubuntu 12.04.)
Any help will be appreciated. Thanks!
Try generating a new private/public key pair and provide a passphrase for it. If that works, the problem is that your current key doesn't use a passphrase.

ubuntu rabbitmq - Error: unable to connect to node 'rabbit@somename': nodedown

I am using Celery for Django, which needs RabbitMQ. Some 4 or 5 months back it used to work well. I tried using it again for a new project and got the error below from RabbitMQ while listing queues.
Listing queues ...
Error: unable to connect to node 'rabbit@somename': nodedown
diagnostics:
- nodes and their ports on 'somename': [{rabbitmqctl23014,44910}]
- current node: 'rabbitmqctl23014@somename'
- current node home dir: /var/lib/rabbitmq
- current node cookie hash: XfMxei3DuB8GOZUm1vdUsg==
What's the solution? If there is no good solution, can I uninstall and reinstall RabbitMQ?
I had installed RabbitMQ as a service, apparently, and the
sudo rabbitmqctl force_reset
command was not working.
sudo service rabbitmq-server restart
did exactly what I needed.
P.S. I made sure I was the root user before running the previous command:
sudo su
If you need to change the hostname:
sudo aptitude remove rabbitmq-server
sudo rm -fr /var/lib/rabbitmq/
Set the new hostname:
hostname newhost
Set the new value in /etc/hostname, and add this line to /etc/hosts:
127.0.0.1 newhost
Reinstall rabbitmq:
sudo aptitude install rabbitmq-server
Done.
Check if the server is running by using this command:
sudo service rabbitmq-server status
If it says
Status of all running nodes...
Node 'rabbit#ubuntu' with Pid 26995:
running done.
It's running.
In my case, I accidentally ran the rabbitmqctl command with a different user and got the error you mentioned.
You might have installed it as root; try running
sudo rabbitmqctl stop_app
and see what the response is.
(If everything's fine, run
sudo rabbitmqctl start_app
afterwards).
Double check that your cookie hash file is the same.
Double check that your machine name (uname) is the same as the one stated in your configuration - this one can be tricky.
And double check that you start RabbitMQ as the same user you installed it with. Just using 'sudo' won't do the trick.
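The cookie and hostname checks above can be sketched as follows (paths are the Debian/Ubuntu defaults; adjust if yours differ):

```shell
# The cookie the server uses and the one rabbitmqctl (run via sudo, i.e. root)
# uses must be identical, and the node host must match the short hostname.
sudo cmp /var/lib/rabbitmq/.erlang.cookie /root/.erlang.cookie \
  && echo "cookies match" || echo "cookie mismatch"
hostname -s   # must match the <host> part of rabbit@<host>
```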