PM2 doesn't launch Loopback 4 app on DigitalOcean Ubuntu server - loopbackjs

I am trying to launch LoopBack on DigitalOcean following this tutorial:
https://loopback.io/doc/en/lb4/deploying-with-pm2-and-nginx.html
The problem is that when I launch it with the "pm2 start" command, PM2 itself seems to start, but it doesn't start the LoopBack app. The logs don't show anything (screenshot: https://i.stack.imgur.com/MvaV2.png).
I double-checked: the LoopBack app itself is fine and starts successfully with "npm run start:local".
Here are my package.json scripts:
"start:local": "node -r source-map-support/register .",
"start": "pm2 start ecosystem.config.js --env production",
"stop": "pm2 stop ecosystem.config.js --env production",
ecosystem.config.js:
module.exports = {
  apps: [
    {
      name: 'BFF',
      script: './dist/index.js',
      instances: 1,
      interpreter: 'node#10.20.1',
      autorestart: true,
      watch: true,
      max_memory_restart: '1G',
      env: {
        NODE_ENV: 'development',
      },
      env_production: {
        NODE_ENV: 'production',
      },
    },
  ],
};
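For comparison, a minimal sketch of an ecosystem.config.js for the same dist build, assuming the node binary on the server's PATH should run the app. Note that PM2's interpreter option expects a path to a Node.js binary, so a value like 'node#10.20.1' does not resolve to a real executable and may be why nothing starts; the absolute nvm path in the comment below is illustrative only:

module.exports = {
  apps: [
    {
      name: 'BFF',
      script: './dist/index.js',
      instances: 1,
      // interpreter defaults to the `node` found on PATH; to pin a version,
      // point it at an absolute path (illustrative example):
      // interpreter: '/home/deploy/.nvm/versions/node/v10.20.1/bin/node',
      autorestart: true,
      watch: false, // watching files in production can trigger restart loops
      max_memory_restart: '1G',
      env: {
        NODE_ENV: 'development',
      },
      env_production: {
        NODE_ENV: 'production',
      },
    },
  ],
};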

Related

Getting error `repository does not exist or may require 'docker login': denied: requested access to the resource is denied` in Elastic Beanstalk

While deploying a .NET app as a Docker container with the Multicontainer option in Elastic Beanstalk, I'm getting an error like:
2021-05-20 01:26:55 ERROR ECS task stopped due to: Task failed to start. (traveltouchapi: CannotPullContainerError: Error response from daemon: pull access denied for traveltouchapi, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
postgres_image: )
2021-05-20 01:26:58 ERROR Failed to start ECS task after retrying 2 times.
2021-05-20 01:27:00 ERROR [Instance: i-0844a50e307bd8b23] Command failed on instance. Return code: 1 Output: .
Environment details for: TravelTouchApi-dev3
Application name: TravelTouchApi
Region: ap-south-1
Deployed Version: app-c1ba-210520_065320
Environment ID: e-i9t6f6vszk
Platform: arn:aws:elasticbeanstalk:ap-south-1::platform/Multi-container Docker running on 64bit Amazon Linux/2.26.0
Tier: WebServer-Standard-1.0
CNAME: TravelTouchApi-dev3.ap-south-1.elasticbeanstalk.com
Updated: 2021-05-20 01:23:27.384000+00:00
My Dockerfile is
FROM mcr.microsoft.com/dotnet/core/aspnet:3.1 AS base
WORKDIR /app
EXPOSE 80
EXPOSE 443
FROM mcr.microsoft.com/dotnet/core/sdk:3.1 AS build
# Install Node.js
RUN curl -fsSL https://deb.nodesource.com/setup_14.x | bash - \
    && apt-get install -y nodejs \
    && rm -rf /var/lib/apt/lists/*
WORKDIR /src/TravelTouchApi
COPY ["TravelTouchApi.csproj", "./"]
RUN dotnet restore "TravelTouchApi.csproj"
COPY . .
WORKDIR "/src/TravelTouchApi"
RUN dotnet build "TravelTouchApi.csproj" -c Release -o /app/build
FROM build AS publish
RUN dotnet publish "TravelTouchApi.csproj" -c Release -o /app/publish
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "TravelTouchApi.dll"]
My docker-compose.yml is
version: '3.4'

networks:
  traveltouchapi-dev:
    driver: bridge

services:
  traveltouchapi:
    image: traveltouchapi:latest
    depends_on:
      - "postgres_image"
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "80:80"
    environment:
      DB_CONNECTION_STRING: "host=postgres_image;port=5432;database=blogdb;username=bloguser;password=bloguser"
    networks:
      - traveltouchapi-dev

  postgres_image:
    image: postgres:latest
    ports:
      - "5432"
    restart: always
    volumes:
      - db_volume:/var/lib/postgresql/data
    environment:
      POSTGRES_USER: "bloguser"
      POSTGRES_PASSWORD: "bloguser"
      POSTGRES_DB: "blogdb"
    networks:
      - traveltouchapi-dev

volumes:
  db_volume:
My Dockerrun.aws.json
{
  "AWSEBDockerrunVersion": 2,
  "containerDefinitions": [
    {
      "environment": [
        {
          "name": "POSTGRES_USER",
          "value": "bloguser"
        },
        {
          "name": "POSTGRES_PASSWORD",
          "value": "bloguser"
        },
        {
          "name": "POSTGRES_DB",
          "value": "blogdb"
        }
      ],
      "essential": true,
      "image": "postgres:latest",
      "memory": 200,
      "mountPoints": [
        {
          "containerPath": "/var/lib/postgresql/data",
          "sourceVolume": "Db_Volume"
        }
      ],
      "name": "postgres_image",
      "portMappings": [
        {
          "containerPort": 5432
        }
      ]
    },
    {
      "environment": [
        {
          "name": "DB_CONNECTION_STRING",
          "value": "host=postgres_image;port=5432;database=blogdb;username=bloguser;password=bloguser"
        }
      ],
      "essential": true,
      "image": "traveltouchapi:latest",
      "name": "traveltouchapi",
      "portMappings": [
        {
          "containerPort": 80,
          "hostPort": 80
        }
      ],
      "memory": 200
    }
  ],
  "family": "",
  "volumes": [
    {
      "host": {
        "sourcePath": "db_volume"
      },
      "name": "Db_Volume"
    }
  ]
}
I think you are missing the login step before deploying the application.
Can you try running this command before deploying?
aws ecr get-login-password --region $AWS_DEFAULT_REGION | docker login --username AWS --password-stdin $AWS_DEFAULT_ACCID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com
Also, the image name in Dockerrun.aws.json must contain the full repo/tag name, e.g. 'natheesh/traveltouchapi:latest'.
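For illustration, a minimal sketch of the tag-and-push sequence, assuming the image should live in an ECR repository named traveltouchapi and that $AWS_DEFAULT_ACCID and $AWS_DEFAULT_REGION are already exported (both variable names are placeholders taken from the login command above):

# Log in first with the get-login-password command above, then:
docker build -t traveltouchapi:latest .
docker tag traveltouchapi:latest $AWS_DEFAULT_ACCID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/traveltouchapi:latest
docker push $AWS_DEFAULT_ACCID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/traveltouchapi:latest

The "image" fields in Dockerrun.aws.json would then reference that full registry path instead of the bare traveltouchapi:latest.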

how to start an eventstore in Gitlab-CI test

I have the following code in my gitlab repo
package.json
{
  ...
  "scripts": {
    "test": "mocha --require ts-node/register --watch-extensions ts,tsx \"src/**/*.{spec,test}.{ts,tsx}\""
  }
  ...
}
.gitlab-ci.yml
stages:
  - test

test:
  image: node:8
  stage: test
  script:
    - npm install
    - npm run test
test.ts
import { exec } from 'child_process';
import { promisify } from 'util';

const Exec = promisify(exec);

describe('test', () => {
  before(async () => {
    // next line doesn't work in GitLab-CI
    await Exec(`docker run -d --rm -p 1113:1113 -p 2113:2113 eventstore/eventstore`);
    // and so on
  });
});
It works well when I run "npm run test" on my local machine.
My question is: how can I run this test in GitLab CI?
If your tests need to connect to an eventstore Docker container, you can use GitLab services:
GitLab CI uses the services keyword to define which Docker containers should be linked with your base image.
First, you will need to set up the Docker executor. Then you will be able to use eventstore as a service; here is an example with postgres, and more information here.
Example:
test_server:
  tags:
    - docker
  services:
    - eventstore:latest
  script:
    - npm install && npm run test
Edit:
To access the service:
The default aliases for the service's hostname are created from its image name.
Or use an alias:
services:
  - name: mysql:latest
    alias: mysql-1
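Putting the pieces together, a minimal sketch of a .gitlab-ci.yml for the test job above, assuming the eventstore/eventstore image from the question and an alias of eventstore-1 (the alias name is illustrative; the tests would then connect to eventstore-1:2113 instead of starting the container themselves):

stages:
  - test

test:
  image: node:8
  stage: test
  tags:
    - docker
  services:
    - name: eventstore/eventstore:latest
      alias: eventstore-1
  script:
    - npm install
    - npm run test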

Copy local file to remote AWS EC2 instance with Ansible

I'm attempting to build an AWS AMI, using Packer to build it and Ansible to provision it. I'm getting stuck on copying some local files to my newly spun-up EC2 instance, using Ansible's copy module to do so. Here's what my Ansible code looks like:
- name: Testing copy of the local remote file
  copy:
    src: /tmp/test.test
    dest: /tmp
Here's the error I get:
amazon-ebs: TASK [Testing copy of the local remote file] ***********************************
amazon-ebs: fatal: [127.0.0.1]: FAILED! => {"changed": false, "failed": true, "msg": "Unable to find '/tmp/test.test' in expected paths."}
I've verified that the file /tmp/test.test exists on my local machine from which Ansible is running.
My hosts file just contains localhost, since Packer tells Ansible everything it needs to know about where to run its commands.
I'm not sure where to go from here or how to properly debug this error, so I'm hoping for a little help.
Here's what my Packer script looks like:
{
  "variables": {
    "aws_access_key": "{{env `access_key`}}",
    "aws_secret_key": "{{env `secret_key`}}"
  },
  "builders": [{
    "type": "amazon-ebs",
    "access_key": "{{user `aws_access_key`}}",
    "secret_key": "{{user `aws_secret_key`}}",
    "region": "us-east-1",
    "source_ami": "ami-116d857a",
    "instance_type": "t2.micro",
    "ssh_username": "admin",
    "ami_name": "generic_jenkins_image",
    "ami_description": "Testing AMI building with Packer",
    "vpc_id": "xxxxxxxx",
    "subnet_id": "xxxxxxxx",
    "associate_public_ip_address": "true",
    "tags": {"Environment": "Dev", "Product": "SharedOperations"}
  }],
  "provisioners": [
    {
      "type": "shell",
      "inline": [
        "sleep 30",
        "sudo rm -f /var/lib/dpkg/lock",
        "sudo apt-get update -y --fix-missing",
        "sudo apt-get -y install libpq-dev python-dev libxml2-dev libxslt1-dev libldap2-dev libsasl2-dev libffi-dev gcc build-essential python-pip",
        "sudo pip install ansible"
      ]
    },
    {
      "type": "ansible-local",
      "playbook_file": "ansible/main.yml"
    }
  ]
}
And here's my entire Ansible file:
---
- hosts: all
  sudo: yes
  tasks:
    - name: Testing copy of the local remote file
      copy:
        src: /tmp/test.test
        dest: /tmp
You are using the ansible-local provisioner, which runs the playbooks directly on the target ("local" in HashiCorp products like Vagrant and Packer describes the point of view of the provisioned machine).
The target does not have the /tmp/test.test file, hence the error.
You actually want to run the playbook using the regular ansible provisioner, which executes on the machine running Packer and connects to the target over SSH; see the sketch below.
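For illustration, a minimal sketch of the provisioners section using the regular ansible provisioner (this assumes Ansible is installed on the machine running Packer, which also makes the shell step that installs Ansible on the instance unnecessary):

"provisioners": [
  {
    "type": "ansible",
    "playbook_file": "ansible/main.yml"
  }
]

With this setup, the copy module reads /tmp/test.test from the machine running Packer and writes it to the target instance over SSH.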

Rails app deployment with AZK fail on Digital Ocean

I'm trying to push a very simple Rails app to a DigitalOcean droplet. Unfortunately, I'm unable to continue the deployment: I get stuck with a very elusive error message:
info: [agent] get agent status
info: [agent] agent is running: true
info: [agent] get agent status
info: [agent] agent is running: true
info: [agent] get agent status
info: [agent] agent is running: true
info: [agent] get agent status
info: [agent] agent is running: true
info: Connecting to http://192.168.50.4:2375
PLAY [all] *********************************************************************
TASK [setup] *******************************************************************
fatal: [default]: FAILED! => {"changed": false, "failed": true, "module_stderr": "", "module_stdout": "/bin/sh: 1: /usr/bin/python: not found\r\n", "msg": "MODULE FAILURE", "parsed": false}
NO MORE HOSTS LEFT *************************************************************
to retry, use: --limit #playbooks/setup.retry
PLAY RECAP *********************************************************************
default : ok=0 changed=0 unreachable=0 failed=1
Am I the only one who has ever encountered this problem?
Here is my Azkfile too:
/**
 * Documentation: http://docs.azk.io/Azkfile.js
 */
// Adds the systems that shape your system
systems({
  'apptelier-website': {
    // Dependent systems
    depends: [],
    // More images: http://images.azk.io
    image: {docker: 'azukiapp/ruby:2.3.0'},
    // Steps to execute before running instances
    provision: [
      "bundle install --path /azk/bundler"
    ],
    workdir: "/azk/#{manifest.dir}",
    shell: "/bin/bash",
    command: ["bundle", "exec", "rackup", "config.ru", "--pid", "/tmp/ruby.pid", "--port", "$HTTP_PORT", "--host", "0.0.0.0"],
    wait: 20,
    mounts: {
      '/azk/#{manifest.dir}': sync("."),
      '/azk/bundler': persistent("./bundler"),
      '/azk/#{manifest.dir}/tmp': persistent("./tmp"),
      '/azk/#{manifest.dir}/log': path("./log"),
      '/azk/#{manifest.dir}/.bundle': path("./.bundle")
    },
    scalable: {"default": 1},
    http: {
      domains: [
        '#{env.HOST_DOMAIN}',                    // used if deployed
        '#{env.HOST_IP}',                        // used if deployed
        '#{system.name}.#{azk.default_domain}'   // default azk domain
      ]
    },
    ports: {
      // exports global variables
      http: "3000/tcp"
    },
    envs: {
      // Make sure that the PORT value is the same as the one
      // in ports/http below, and that it's also the same
      // if you're setting it in a .env file
      RUBY_ENV: "production",
      RAILS_ENV: "production",
      RACK_ENV: 'production',
      WORKER_RETRY: 1,
      BUNDLE_APP_CONFIG: '/azk/bundler',
      APP_URL: '#{system.name}.#{azk.default_domain}'
    }
  },
  deploy: {
    image: {docker: 'azukiapp/deploy-digitalocean'},
    mounts: {
      '/azk/deploy/src': path('.'),
      '/azk/deploy/.ssh': path('#{env.HOME}/.ssh'), // Required to connect with the remote server
      '/azk/deploy/.config': persistent('deploy-config')
    },
    // This is not a server. Just run it with `azk deploy`
    scalable: {default: 0, limit: 0},
    envs: {
      GIT_REF: 'master',
      AZK_RESTART_COMMAND: 'azk restart -Rvv',
      BOX_SIZE: '512mb'
    }
  }
});
Thanks for the help !
Edouard, nice to meet you. I'm from the azk core team, and I'm not sure you're using the latest deployment image. Please follow these steps to update it:
1. Pull the latest image: adocker pull azukiapp/deploy-digitalocean:0.0.7
2. Edit the deploy system in your Azkfile and add the 0.0.7 tag to the deployment image, to ensure the latest one is used. It should look like: image: {docker: 'azukiapp/deploy-digitalocean:0.0.7'} (see the sketch below).
3. Next, be sure you have the env DEPLOY_API_TOKEN set in your .env file. If you don't have it set yet, take a look at Step 7 of the article we've published in the DigitalOcean Community Tutorials: https://www.digitalocean.com/community/tutorials/how-to-deploy-a-rails-app-with-azk#step-7-%E2%80%94-obtaining-a-digitalocean-api-token
4. Finally, re-run the deploy command:
azk deploy clear-cache;
azk deploy
Please let me know if this is enough to solve your problem.
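For reference, a minimal sketch of the deploy block from the Azkfile above with only the image tag changed (everything else stays as in the question):

deploy: {
  image: {docker: 'azukiapp/deploy-digitalocean:0.0.7'},
  mounts: {
    '/azk/deploy/src': path('.'),
    '/azk/deploy/.ssh': path('#{env.HOME}/.ssh'), // Required to connect with the remote server
    '/azk/deploy/.config': persistent('deploy-config')
  },
  // This is not a server. Just run it with `azk deploy`
  scalable: {default: 0, limit: 0},
  envs: {
    GIT_REF: 'master',
    AZK_RESTART_COMMAND: 'azk restart -Rvv',
    BOX_SIZE: '512mb'
  }
}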

Go & Docker: I'm able to run a go web server when using stdlib, when I use custom packages errors occur

Note: the code works perfectly fine when I run it on my laptop.
The following two snippets both run on my laptop. However, the second one (which uses my custom package) doesn't work on Elastic Beanstalk running Docker.
Standard Lib only
package main

import (
    "net/http"
    "os"
)

func main() {
    port := os.Getenv("PORT")
    if port == "" {
        port = "3000"
    }
    http.ListenAndServe(":"+port, nil)
}
Uses Custom Package
package main

import (
    "os"

    "github.com/sim/handlers"
)

func main() {
    port := os.Getenv("PORT")
    if port == "" {
        port = "3000"
    }
    handlers.ServeAndHandle(port) // wrapper of ListenAndServe
}
Error Messages:
Failed to build Docker image aws_beanstalk/staging-app: andlers: exit status 128 [0mtime="2015-08-14T05:08:17Z" level="info" msg="The command [/bin/sh -c go-wrapper download] returned a non-zero code: 1" . Check snapshot logs for details.
2015-08-14 01:08:15 UTC-0400 WARN Failed to build Docker image aws_beanstalk/staging-app, retrying...
cron.yaml
version: 1
cron:
  - name: "task1"
    url: "/scheduled"
    schedule: "* * * * *"
You need a Dockerfile and/or a Dockerrun.aws.json for your environment, as per the documentation.
Dockerfile
FROM golang:1.3-onbuild
EXPOSE 3000
CMD ["go", "run", "<your.go.file>"]
Dockerrun.aws.json
{
  "AWSEBDockerrunVersion": "1",
  "Image": {
    "Name": "golang:1.3-onbuild",  # <-- don't need this if you are using a Dockerfile
    "Update": "true"
  },
  "Ports": [
    {
      "ContainerPort": "3000"
    }
  ],
  "Logging": "/var/log/go"
}
Are you using the eb command line to deploy?
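As a side note on the original error: the onbuild image runs go-wrapper download, which tries to go get github.com/sim/handlers during the build, and the exit status 128 in the log is typical of git failing to clone a private or unreachable repository. A minimal sketch of a Dockerfile that copies the custom package into the image instead of fetching it (the repository layout and the github.com/sim/app import path are assumptions for illustration):

FROM golang:1.3
# Copy the app and the custom package into the GOPATH
# instead of letting the build `go get` them.
COPY . /go/src/github.com/sim/app
COPY handlers /go/src/github.com/sim/handlers
RUN go install github.com/sim/app
EXPOSE 3000
CMD ["/go/bin/app"]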