Rails app deployment with azk fails on DigitalOcean - ruby-on-rails-4

I'm trying to push a very simple Rails app to a DigitalOcean droplet. Unfortunately, I'm unable to continue the deployment: I get stuck with a very elusive error message:
info: [agent] get agent status
info: [agent] agent is running: true
info: [agent] get agent status
info: [agent] agent is running: true
info: [agent] get agent status
info: [agent] agent is running: true
info: [agent] get agent status
info: [agent] agent is running: true
info: Connecting to http://192.168.50.4:2375
PLAY [all] *********************************************************************
TASK [setup] *******************************************************************
fatal: [default]: FAILED! => {"changed": false, "failed": true, "module_stderr": "", "module_stdout": "/bin/sh: 1: /usr/bin/python: not found\r\n", "msg": "MODULE FAILURE", "parsed": false}
NO MORE HOSTS LEFT *************************************************************
to retry, use: --limit @playbooks/setup.retry
PLAY RECAP *********************************************************************
default : ok=0 changed=0 unreachable=0 failed=1
Am I the only one who has ever encountered this problem?
Here is my Azkfile too:
/**
 * Documentation: http://docs.azk.io/Azkfile.js
 */
// Adds the systems that shape your system
systems({
  'apptelier-website': {
    // Dependent systems
    depends: [],
    // More images: http://images.azk.io
    image: {docker: 'azukiapp/ruby:2.3.0'},
    // Steps to execute before running instances
    provision: [
      "bundle install --path /azk/bundler"
    ],
    workdir: "/azk/#{manifest.dir}",
    shell: "/bin/bash",
    command: ["bundle", "exec", "rackup", "config.ru", "--pid", "/tmp/ruby.pid", "--port", "$HTTP_PORT", "--host", "0.0.0.0"],
    wait: 20,
    mounts: {
      '/azk/#{manifest.dir}': sync("."),
      '/azk/bundler': persistent("./bundler"),
      '/azk/#{manifest.dir}/tmp': persistent("./tmp"),
      '/azk/#{manifest.dir}/log': path("./log"),
      '/azk/#{manifest.dir}/.bundle': path("./.bundle")
    },
    scalable: {"default": 1},
    http: {
      domains: [
        '#{env.HOST_DOMAIN}',                   // used if deployed
        '#{env.HOST_IP}',                       // used if deployed
        '#{system.name}.#{azk.default_domain}'  // default azk domain
      ]
    },
    ports: {
      // exports global variables
      http: "3000/tcp"
    },
    envs: {
      // Make sure that the PORT value is the same as the one
      // in ports/http below, and that it's also the same
      // if you're setting it in a .env file
      RUBY_ENV: "production",
      RAILS_ENV: "production",
      RACK_ENV: 'production',
      WORKER_RETRY: 1,
      BUNDLE_APP_CONFIG: '/azk/bundler',
      APP_URL: '#{system.name}.#{azk.default_domain}'
    }
  },
  deploy: {
    image: {docker: 'azukiapp/deploy-digitalocean'},
    mounts: {
      '/azk/deploy/src': path('.'),
      '/azk/deploy/.ssh': path('#{env.HOME}/.ssh'), // Required to connect with the remote server
      '/azk/deploy/.config': persistent('deploy-config')
    },
    // This is not a server. Just run it with `azk deploy`
    scalable: {default: 0, limit: 0},
    envs: {
      GIT_REF: 'master',
      AZK_RESTART_COMMAND: 'azk restart -Rvv',
      BOX_SIZE: '512mb'
    }
  }
});
Thanks for the help!

Edouard, nice to meet you. I'm from the azk core team, and I'm not sure whether you're using the latest deployment image.
Please follow these steps to update it:
adocker pull azukiapp/deploy-digitalocean:0.0.7;
Edit the deploy system in your Azkfile and add the tag 0.0.7 to the deployment image, to make sure you're using the latest one. It should look like: image: {docker: 'azukiapp/deploy-digitalocean:0.0.7'};
Next, be sure you have the env DEPLOY_API_TOKEN set in your .env file (see the example after these steps). If you don't have it set yet, take a look at Step 7 of the article we've published in the DigitalOcean Community Tutorials: https://www.digitalocean.com/community/tutorials/how-to-deploy-a-rails-app-with-azk#step-7-%E2%80%94-obtaining-a-digitalocean-api-token
Finally, re-run the deploy command:
azk deploy clear-cache;
azk deploy
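For reference, here is roughly what the result should look like. In the Azkfile, only the image line of the deploy system changes (the rest is taken from the Azkfile above):
deploy: {
  image: {docker: 'azukiapp/deploy-digitalocean:0.0.7'},
  mounts: {
    '/azk/deploy/src': path('.'),
    '/azk/deploy/.ssh': path('#{env.HOME}/.ssh'),
    '/azk/deploy/.config': persistent('deploy-config')
  },
  scalable: {default: 0, limit: 0},
  envs: {
    GIT_REF: 'master',
    AZK_RESTART_COMMAND: 'azk restart -Rvv',
    BOX_SIZE: '512mb'
  }
}
And in .env (the token value is only a placeholder):
DEPLOY_API_TOKEN=your_digitalocean_api_token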
Please let me know if this is enough to solve your problem.

Related

PM2 doesn't launch Loopback 4 app on DigitalOcean Ubuntu server

I am trying to launch Loopback on Digital Ocean following this tutorial:
https://loopback.io/doc/en/lb4/deploying-with-pm2-and-nginx.html
The problem is that when I launch it with the "pm2 start" command, pm2 seems to start, but it doesn't start the Loopback app. The logs don't show anything (screenshot - https://i.stack.imgur.com/MvaV2.png).
I double-checked: the Loopback app is fine and starts successfully with "npm run start:local".
Here are my package.json commands:
"start:local": "node -r source-map-support/register .",
"start": "pm2 start ecosystem.config.js --env production",
"stop": "pm2 stop ecosystem.config.js --env production",
ecosystem.config.js:
module.exports = {
  apps: [
    {
      name: 'BFF',
      script: './dist/index.js',
      instances: 1,
      interpreter: 'node@10.20.1',
      autorestart: true,
      watch: true,
      max_memory_restart: '1G',
      env: {
        NODE_ENV: 'development',
      },
      env_production: {
        NODE_ENV: 'production',
      },
    },
  ],
};
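A quick way to check what pm2 actually registered and what it logged, using standard pm2 commands ('BFF' is the app name defined in the ecosystem.config.js above):
pm2 list            # shows whether the BFF process is online, errored, or stopped
pm2 logs BFF        # streams the app's stdout/stderr, including startup errors
pm2 describe BFF    # shows the script path, interpreter, and environment pm2 is using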

Error in Ansible Playbook where Cloudwatch Agent Status is being checked

Can you help me? This is my Ansible playbook:
---
- hosts: "{{host_list}}"
  remote_user: root
  gather_facts: true
  tasks:
    - name: Check if Cloudwatch Agent is Installed Already
      command: service status amazon-cloudwatch-agent
      register: init_status_result
      ignore_errors: yes
    - debug:
        var: init_status_result.stderr
        verbosity: 4
    - name: Create Directory for Downloading Cloudwatch Agent zip
      file:
        path: /opt/aws/amazon-cloudwatch-zip
        state: directory
        owner: root
        group: root
        mode: '0755'
        recurse: no
      when: init_status_result.stderr is search ("For other actions, please try to use systemctl")
I get this error when attempting to run my playbook (really, I just want a way to continue through the playbook when the CloudWatch agent service is not found):
user1@ansible01-infra-mgnt:~/.ansible/playbooks/cw_agent$ ansible-playbook -K -i /home/user1/.ansible/etc/hosts --extra-vars="host_list=11.22.33.44" install_cw_agent.yml
SUDO password:
PLAY [11.22.33.44] **************************************************************************************************************************************************************************************************************************
TASK [Gathering Facts] **********************************************************************************************************************************************************************************************************************
ok: [11.22.33.44]
TASK [Check if Cloudwatch Agent is Installed Already] ***************************************************************************************************************************************************************************************
[WARNING]: Consider using the service module rather than running service. If you need to use command because service is insufficient you can add warn=False to this command task or set command_warnings=False in ansible.cfg to get rid
of this message.
fatal: [11.22.33.44]: FAILED! => {"changed": false, "cmd": "service status amazon-cloudwatch-agent", "msg": "[Errno 2] No such file or directory", "rc": 2}
...ignoring
TASK [debug] ********************************************************************************************************************************************************************************************************************************
skipping: [11.22.33.44]
TASK [Create Directory for Downloading Cloudwatch Agent zip] ********************************************************************************************************************************************************************************
fatal: [11.22.33.44]: FAILED! => {"msg": "The conditional check 'init_status_result.stderr is search (\"For other actions, please try to use systemctl\")' failed. The error was: Unexpected templating type error occurred on ({% if init_status_result.stderr is search (\"For other actions, please try to use systemctl\") %} True {% else %} False {% endif %}): expected string or buffer\n\nThe error appears to have been in '/home/user1/.ansible/playbooks/cw_agent/install_cw_agent.yml': line 15, column 7, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n - name: Create Directory for Downloading Cloudwatch Agent zip\n ^ here\n"}
to retry, use: --limit @/home/user1/.ansible/playbooks/cw_agent/install_cw_agent.retry
PLAY RECAP **********************************************************************************************************************************************************************************************************************************
11.22.33.44 : ok=2 changed=0 unreachable=0 failed=1
Try the shell module to check the service status instead of the command module:
shell: "service status service_name"
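As a minimal sketch, the suggested task would look like this (assuming the same service name and register variable as in the playbook above):
- name: Check if Cloudwatch Agent is Installed Already
  shell: "service status amazon-cloudwatch-agent"
  register: init_status_result
  ignore_errors: yes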

migration run out of gas

Hi, I am using Geth and when I try to run truffle migrate it gives an error.
My truffle-config.js is below:
development: {
  host: "127.0.0.1",   // Localhost (default: none)
  port: 8545,          // Standard Ethereum port (default: none)
  network_id: "4",     // Rinkeby id
  from: "my address",
  gas: 1000
}
When I run truffle migrate, I get this error:
Error: Error: Error: *** Deployment Failed ***
"Migrations" ran out of gas (using a value you set in your network
config or deployment parameters.)
* Block limit: 0x50e7c
* Gas sent: 1000
at Object.run (/usr/local/lib/node_modules/truffle/build/webpack:/packages/truffle-migrate/index.js:92:1)
at <anonymous>
at process._tickCallback (internal/process/next_tick.js:188:7)
Can you help me please?
I solved the error by adding this code to the truffle config file.
compilers: {
  solc: {
    version: "0.5.16",
    settings: {
      optimizer: {
        enabled: true, // Default: false
        runs: 1000,    // Default: 200
      },
    },
  },
},
It's exactly what the error says:
ran out of gas (using a value you set in your network config or deployment parameters.)
gas: 1000 is not enough to deploy your contract.
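For illustration only, a development block with a more realistic gas limit might look like this; 5500000 is just an example value and must stay below the network's block gas limit:
development: {
  host: "127.0.0.1",
  port: 8545,
  network_id: "4",
  from: "my address",
  gas: 5500000  // example value, well above the 1000 set before and below the block gas limit
}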

Meteor deploy error (mup): pushing meteor app bundle to server failed

I am trying to deploy a meteor app to an AWS server, but am getting this message:
Started TaskList: Configuring App
[52.41.84.125] - Pushing the Startup Script
nodemiral:sess:52.41.84.125 copy file - src: /Users/Olivia/.nvm/versions/node/v7.8.0/lib/node_modules/mup/lib/modules/meteor/assets/templates/start.sh, dest: /opt/CanDu/config/start.sh, vars: {"appName":"CanDu","useLocalMongo":0,"port":80,"bind":"0.0.0.0","logConfig":{"opts":{"max-size":"100m","max-file":10}},"docker":{"image":"abernix/meteord:base","imageFrontendServer":"meteorhacks/mup-frontend-server","imagePort":80},"nginxClientUploadLimit":"10M"} +0ms
[52.41.84.125] x Pushing the Startup Script: FAILED Failure
Previously I had been able to deploy using mup, but now I am getting this message. The only major thing I've changed is the Python path in my .noderc. I am also able to SSH into my amazon server directly from the terminal. My mup file is:
module.exports = {
  servers: {
    one: {
      host: '##.##.##.###',
      username: 'ec2-user',
      pem: '/Users/Olivia/.ssh/oz-pair.pem'
      // password:
      // or leave blank for authenticate from ssh-agent
    }
  },
  meteor: {
    name: 'CanDu',
    path: '/Users/Olivia/repos/bene_candu_v2',
    servers: {
      one: {}
    },
    buildOptions: {
      serverOnly: true,
      mobileSettings: {
        public: {
          "astronomer": {
            "appId": "<key>",
            "disableUserTracking": false,
            "disableRouteTracking": false,
            "disableMethodTracking": false
          },
          "googleMaps": "<key>",
          "facebook": {
            "permissions": ["email", "public_profile", "user_friends"]
          }
        },
      },
    },
    env: {
      ROOT_URL: 'http://ec2-##-##-##-###.us-west-2.compute.amazonaws.com',
      MONGO_URL: 'mongodb://. . .'
    },
    /*ssl: {
      crt: '/opt/keys/server.crt', // this is a bundle of certificates
      key: '/opt/keys/server.key', // this is the private key of the certificate
      port: 443, // 443 is the default value and it's the standard HTTPS port
      upload: false
    },*/
    docker: {
      image: 'abernix/meteord:base'
    },
    deployCheckWaitTime: 60
  }
};
And I have checked to make sure there are no trailing commas, and have tried increasing the wait time, etc. The error message I'm getting is pretty unhelpful. Does anyone have any insight? Thank you so much!
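For what it's worth, the manual SSH check mentioned above can be reproduced with the same key and user as in the mup config (host masked as in the question):
ssh -i /Users/Olivia/.ssh/oz-pair.pem ec2-user@##.##.##.###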

No Ports Assigned to Strongloop App

I am attempting to deploy a StrongLoop app to a DigitalOcean remote box running StrongLoop Process Manager. I have gotten as far as successfully running the deploy command as follows:
USER ~/projects/loopback/places-api $ slc deploy http://IPADDRESS deploy
Counting objects: 5215, done.
Delta compression using up to 4 threads.
Compressing objects: 100% (4781/4781), done.
Writing objects: 100% (5215/5215), 7.06 MiB | 4.27 MiB/s, done.
Total 5215 (delta 1130), reused 0 (delta 0)
To http://104.131.66.124:8701/api/services/1/deploy/default
* [new branch] deploy -> deploy
Deployed `deploy` as `placesAPI` to `http://IPADDRESS:8701/`
Next, I check the status of my Strongloop app by running the following command:
slc ctl -C http://IPADDRESS:8701
Service ID: 1
Service Name: placesAPI
Environment variables:
No environment variables defined
Instances:
Version Agent version Debugger version Cluster size Driver metadata
5.1.0 2.0.2 n/a 1 N/A
Processes:
ID PID WID Listening Ports Tracking objects? CPU profiling? Tracing? Debugging?
1.1.1050 1050 0
1.1.2065 2065 49
At this point, I am not able to access my app by visiting IPADDRESS:3001 as the StrongLoop documentation suggests, and the app status above lists no processes running on port 3001, as would be expected according to the documentation.
Comparing my app status to the one shown at this stage of deployment in the StrongLoop documentation, it appears I should have some processes listening on port 3001 which are not running in my app.
Here is the app status shown in the Strongloop documentation:
$ slc ctl -C http://prod.foo.com:7777
Service ID: 1
Service Name: appone
Environment variables:
No environment variables defined
Instances:
Version Agent version Cluster size
4.0.30 1.4.15 4
Processes:
ID PID WID Listening Ports Tracking objects? CPU profiling?
1.1.22555 22555 0
1.1.22741 22741 5 prod.foo.com:3001
1.1.22748 22748 6 prod.foo.com:3001
1.1.22773 22773 7 prod.foo.com:3001
1.1.22793 22793 8 prod.foo.com:3001
Notice the additional processes listening to port 3001.
My question is: how do I get my strongloop app to run and listen to these ports?
If it helps here are my package.json and config.json files:
:::::::::::::::::::::::package.json::::::::::::::::
{
  "name": "placesAPI",
  "version": "1.0.0",
  "main": "server/server.js",
  "scripts": {
    "start": "node .",
    "pretest": "jshint ."
  },
  "dependencies": {
    "body-parser": "^1.9.0",
    "compression": "^1.0.3",
    "connect-ensure-login": "^0.1.1",
    "cookie-parser": "^1.3.2",
    "cors": "^2.5.2",
    "errorhandler": "^1.1.1",
    "express-flash": "0.0.2",
    "express-session": "^1.7.6",
    "jade": "^1.7.0",
    "loopback": "^2.22.0",
    "loopback-boot": "^2.6.5",
    "loopback-component-explorer": "^2.1.0",
    "loopback-component-passport": "^1.5.0",
    "loopback-connector-postgresql": "^2.4.0",
    "loopback-datasource-juggler": "^2.39.0",
    "passport": "^0.3.2",
    "passport-facebook": "^1.0.3",
    "passport-google-oauth": "^0.2.0",
    "passport-local": "^1.0.0",
    "passport-oauth2": "^1.1.2",
    "passport-twitter": "^1.0.3",
    "serve-favicon": "^2.0.1"
  },
  "devDependencies": {
    "jshint": "^2.5.6"
  },
  "repository": {
    "type": "",
    "url": ""
  },
  "description": "placesAPI",
  "bundleDependencies": [
    "body-parser",
    "compression",
    "connect-ensure-login",
    "cookie-parser",
    "cors",
    "errorhandler",
    "express-flash",
    "express-session",
    "jade",
    "loopback",
    "loopback-boot",
    "loopback-component-explorer",
    "loopback-component-passport",
    "loopback-connector-postgresql",
    "loopback-datasource-juggler",
    "passport",
    "passport-facebook",
    "passport-oauth2",
    "serve-favicon"
  ]
}
:::::::::::::::::::::::config.json::::::::::::::::
{
  "restApiRoot": "/api",
  "host": "0.0.0.0",
  "port": 3000,
  "cookieSecret": "REDACTED",
  "remoting": {
    "context": {
      "enableHttpContext": false
    },
    "rest": {
      "normalizeHttpPath": false,
      "xml": false
    },
    "json": {
      "strict": false,
      "limit": "100kb"
    },
    "urlencoded": {
      "extended": true,
      "limit": "100kb"
    },
    "cors": false,
    "errorHandler": {
      "disableStackTrace": false
    }
  },
  "legacyExplorer": false
}
There is also this error in the logs from log-dump:
2015-12-23T22:13:35.876Z pid:2720 worker:84 events.js:142
2015-12-23T22:13:35.882Z pid:2720 worker:84 throw er; // Unhandled 'error' event
2015-12-23T22:13:35.882Z pid:2720 worker:84 ^
2015-12-23T22:13:35.882Z pid:2720 worker:84 Error: connect ECONNREFUSED 127.0.0.1:5432
2015-12-23T22:13:35.882Z pid:2720 worker:84 at Object.exports._errnoException (util.js:856:11)
2015-12-23T22:13:35.882Z pid:2720 worker:84 at exports._exceptionWithHostPort (util.js:879:20)
2015-12-23T22:13:35.883Z pid:2720 worker:84 at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1064:14)
2015-12-23T22:13:35.919Z pid:1106 worker:0 ERROR supervisor worker id 84 (pid 2720) accidental exit with 1
2015-12-23T22:13:38.253Z pid:1106 worker:0 INFO supervisor started worker 85 (pid 2738)
2015-12-23T22:13:38.253Z pid:1106 worker:0 INFO supervisor resized to 1
2015-12-23T22:13:39.858Z pid:2738 worker:85 INFO strong-agent native addon missing, install a compiler
2015-12-23T22:13:39.859Z pid:2738 worker:85 INFO strong-agent v2.0.2 profiling app 'placesAPI' pid '2738'
2015-12-23T22:13:39.890Z pid:2738 worker:85 INFO strong-agent[2738] started profiling agent
2015-12-23T22:13:44.943Z pid:2738 worker:85 INFO strong-agent not profiling, agent metrics requires a valid license.
2015-12-23T22:13:44.944Z pid:2738 worker:85 Please contact sales@strongloop.com for assistance.
2015-12-23T22:13:44.992Z pid:2738 worker:85 Web server listening at: http://0.0.0.0:3001
2015-12-23T22:13:44.997Z pid:2738 worker:85 Browse your REST API at http://0.0.0.0:3001/explorer
2015-12-23T22:13:45.103Z pid:2738 worker:85 Connection fails: { [Error: connect ECONNREFUSED 127.0.0.1:5432]
2015-12-23T22:13:45.104Z pid:2738 worker:85 code: 'ECONNREFUSED',
2015-12-23T22:13:45.104Z pid:2738 worker:85 errno: 'ECONNREFUSED',
2015-12-23T22:13:45.104Z pid:2738 worker:85 syscall: 'connect',
2015-12-23T22:13:45.104Z pid:2738 worker:85 address: '127.0.0.1',
2015-12-23T22:13:45.104Z pid:2738 worker:85 port: 5432 }
2015-12-23T22:13:45.104Z pid:2738 worker:85 It will be retried for the next request.
Just to put this in an answer: from the package.json we can see that you have included loopback-connector-postgresql, and in the log we see an attempted connection to port 5432, which is the default for that DBMS. It's trying to connect on localhost (127.0.0.1), and my guess is that Postgres is either not installed on your DigitalOcean box or not running. You'll need to update the config for your DB, or install (and run) the DB on your DO droplet.
If you have different configs for dev vs production, then you can set up an environment-specific datasources config file: datasources.production.json, for example. In that file you would put your prod config, and in datasources.json you would keep your dev (local) config. When using this method, be sure to set the NODE_ENV variable on your DO droplet to production (to match the name of the prod datasources config file).
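As a sketch of that setup (the datasource name "db" and all connection values are placeholders to adapt to your own datasources.json), server/datasources.production.json could look like:
{
  "db": {
    "connector": "postgresql",
    "host": "localhost",
    "port": 5432,
    "database": "places_api",
    "username": "postgres",
    "password": "REDACTED"
  }
}
Then set NODE_ENV=production on the droplet (for example, export NODE_ENV=production) so LoopBack loads the production datasources file.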