Stop detached strongloop application - loopbackjs

I installed LoopBack on my server (Ubuntu), created an app, and used the command slc run to run it... everything works as expected.
Now I have one question and one issue I'm facing:
The question: I need to use the slc run command but keep the app "alive" after I close the terminal. For that I used the --detach option and it works. What I want to know is whether the --detach option is best practice or whether I should do it a different way.
The issue: After I use --detach I don't know how to stop the app. Is there a command I can use to stop the process from running?

To stop a --detached process, go to the same directory it was run from and do slc runctl stop. There are a number of runctl commands, but stop is probably the one you are most interested in.
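For example, from the app's root (path illustrative), the flow looks like this; status is one of the other runctl subcommands, assuming your slc version ships it:
cd /path/to/app        # the directory you originally ran `slc run --detach` from
slc runctl status      # check what is running
slc runctl stop        # stop the detached process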
Best practice is a longer answer. The short version is: don't ever use --detach; do use an init script to run your app and keep it running (probably Upstart, since you're on Ubuntu).
Using slc run
If you want to run slc run as an Upstart job, you can install strong-service-install with npm install -g strong-service-install. This gives you sl-svc-install, a utility for creating Upstart and systemd services.
You'll end up running something like sudo sl-svc-install --name my-app --user youruser --cwd /path/to/app/root -- slc run ., which should create an Upstart job named my-app that runs your app as your uid from the app's root. Your app's stdout/stderr will be sent to /var/log/upstart/my-app.log. If you are using a version of Ubuntu older than 12.04, you'll need to specify --upstart 0.6, and your logs will go to syslog instead.
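For reference, the job lands somewhere like /etc/init/my-app.conf. A minimal sketch of what such a file can contain, as an illustration only (the actual generated file will differ in its details):
# /etc/init/my-app.conf -- illustrative sketch, not the literal generated output
start on runlevel [2345]
stop on runlevel [016]
respawn
setuid youruser
chdir /path/to/app/root
exec slc run .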
Using slc pm
Another, possibly easier, route is to use slc pm, which operates at a level above slc run and happens to be easier to install as an OS service. For this route you already have everything installed: run sudo slc pm-install and a strong-pm Upstart service will be installed, along with a strong-pm user to run it as, whose $HOME is /var/lib/strong-pm.
Where the PM approach gets slightly more complicated is that you have to deploy your app to it. Most likely this is just a matter of going to your app root and running slc deploy http://localhost:8701/, but the specifics will depend on your app. You can configure environment variables for your app and deploy new versions, and your logs will show up in /var/log/upstart/strong-pm.log.
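A typical deploy cycle might look like the following sketch; the service name my-app and the env-set subcommand are assumptions based on the stock strong-pm tooling, so check slc ctl --help on your install:
cd /path/to/app/root
slc deploy http://localhost:8701/            # push the current app to the local strong-pm
slc ctl env-set my-app NODE_ENV=production   # assumed subcommand for setting env vars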
General Best Practices
For either of the options above, I recommend not doing npm install -g strongloop on your server, since it includes things like Yeoman generators and other tools that are more useful on a workstation than on a server.
If you want to go the slc run route, you would do npm install -g strong-supervisor strong-service-install and replace your slc run with sl-run.
If you want to go the slc pm route, you would do npm install -g strong-pm and replace slc pm-install with sl-pm-install.
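Putting those substitutions together, the slimmed-down installs would look roughly like this (names and paths carried over from the examples above):
# slc run route
npm install -g strong-supervisor strong-service-install
sudo sl-svc-install --name my-app --user youruser --cwd /path/to/app/root -- sl-run .
# slc pm route
npm install -g strong-pm
sudo sl-pm-install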
Disclaimer
I work at StrongLoop and primarily work on these tools.

View the status of running apps using:
slc ctl status
Example output:
Service ID: 1
Service Name: app
Environment variables:
  No environment variables defined
Instances:
    Version  Agent version  Debugger version  Cluster size  Driver metadata
      5.2.1          2.0.3               n/a             1              N/A
Processes:
          ID   PID   WID  Listening Ports  Tracking objects?  CPU profiling?  Tracing?  Debugging?
    1.1.2708  2708     0
    1.1.5836  5836     1     0.0.0.0:3001
Service ID: 2
Service Name: default
Environment variables:
  No environment variables defined
Instances:
    Version  Agent version  Debugger version  Cluster size  Driver metadata
      5.2.1          2.0.3               n/a             1              N/A
Processes:
          ID   PID   WID  Listening Ports  Tracking objects?  CPU profiling?  Tracing?  Debugging?
    2.1.2760  2760     0
    2.1.1676  1676     1     0.0.0.0:3002
To kill the first app, use slc ctl stop:
slc ctl stop app
Service "app" hard stopped

What if I have to run the application as a cluster? Can I still do it via the Upstart job that was created?
Like:
sudo sl-svc-install --name my-app --user youruser --cwd /path/to/app/root -- slc run --cluster 4 .
I tried doing this, but /etc/init/my-app.conf does not show any information about the cluster.

Related

docker context create ecs myecs - requires exactly one argument

I'm trying to create a Docker context that will automatically integrate with AWS ECS.
I'm following this tutorial.
The author just does docker context create ecs myecs and gets a "pick an integration" prompt, whereas I get an error saying it requires exactly one argument:
"docker context create" requires exactly 1 argument.
See 'docker context create --help'.
Usage: docker context create [OPTIONS] CONTEXT
Create a context
You need to install the Docker Compose CLI preview
The curl command below is from the Docker docs:
curl -L https://raw.githubusercontent.com/docker/compose-cli/main/scripts/install/install_linux.sh | sh
sudo docker context create ecs myecs
It didn't work without sudo for me for some reason.
After the script finished I had some weird errors (cp: cannot stat '/tmp/tmp.d4QjhW8T6k/docker-compose': No such file or directory), and docker context create ecs myecs didn't work at first, but once I tried it with sudo it worked fine.
EDIT: . ~/.zshrc (or just close your terminal and open a new one) made it possible for me to run docker context create ecs myecs without sudo.
Author of the blog/tutorial here. It looks like you don't have the prerequisite installed. In the blog I call out the prereq in pieces like this:
....In July, Docker released a beta for Docker Desktop that embedded these functionalities and, on September 15th, Docker released an updated experience in their Docker Desktop stable channel....
and then
...For now the only thing you need is Docker Desktop and an AWS account. For this test , I am using Docker Desktop (stable) version 2.5.0.1....
and finally
The core of this integration is built around a new tool dubbed Compose CLI (this is not to be confused with the original docker-compose CLI). This new CLI surfaces to the user as new functionalities in the docker command. While in Docker Desktop all this plumbing is completely hidden and available out of the box, if you are using a Linux machine you can set it up using either a script or a manual install. This new CLI is, essentially, a new version of the docker binary.
I'm eager to understand how we could make it clearer / more front and center that there was stuff to install and/or minimum software versions you had to use.
Thanks for trying it out!
If you're on Linux and you're running the docker context create ecs myecscontext command from the docs then try enabling experimental features in docker:
Edit /etc/docker/daemon.json
Set its contents to:
{
  "experimental": true
}
Restart the docker service: sudo systemctl restart docker
Exit your terminal and open a new one so that the changes take effect.
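To confirm the flag took effect, this should print true (docker version is a standard command; the format string just extracts the server's Experimental field):
docker version --format '{{.Server.Experimental}}'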
I had the same issue, but after installing the Docker Desktop version the problem was resolved.
The server-side version doesn't have this kind of functionality.

systemctl start service in Dockerfile script

I am trying to build a Docker image using CentOS 7.7 as my base image, which is a systemd image.
My requirement is this: installing the first RPM starts systemctl start my-process, and that process needs to be running in order to install the second RPM. But since a Dockerfile is not able to start processes using systemctl, I am not able to install any RPM correctly. I am getting the following error:
Failed to get D-Bus connection: Operation not permitted
The "systemctl" client tool does not do much. It looks for the socket to contact the systemd daemon running on PID 1, i.e. the program you have been running from ENTRYPOINT. If you have removed the systemd service then you will get an error like that.
If you want to use a container like a virtual machine then it may be better run a different service-manager on PID-1. An example would be the docker-systemctl-replacement service.
It has served me well to bring applications into containers which were not really meant for that.
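As a rough sketch of how that can look in a Dockerfile (the RPM names are placeholders, and systemctl3.py stands in for the replacement script from the gdraheim/docker-systemctl-replacement repository):
# illustrative sketch, not a drop-in solution
FROM centos:7
COPY systemctl3.py /usr/bin/systemctl   # shadow the real client with the replacement
COPY first.rpm second.rpm /tmp/
RUN yum install -y /tmp/first.rpm \
 && systemctl start my-process \
 && yum install -y /tmp/second.rpm
CMD ["/usr/bin/systemctl"]              # run the replacement as PID 1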

Unable to bring up docker project

I'm following this Docker tutorial, which creates a simple Docker-managed Django site, and when I try to run docker-compose up to launch my docker project, I get the ambiguous error:
ERROR: Couldn't connect to Docker daemon at http+docker://localunixsocket - is it running?
The error suggests that the Docker daemon isn't running, but service docker status shows the Docker daemon is running.
If instead I run sudo docker-compose up, then it succeeds, but it chowns a lot of my local development files to the root user, which is easy enough to fix, but annoying.
Why does Docker require root access just to start a local Django development server? How do I fix this?
My versions:
Docker version 18.06.1-ce, build e68fc7a
docker-compose version 1.11.1, build 7c5d5e4
Ubuntu 16.04.5 LTS
If you can run any Docker command at all, you can trivially root the host:
docker run --rm -v /:/host busybox \
  cat /host/etc/shadow
Additionally, Docker containers frequently run as root within their own container space, which means that whatever parts of the host filesystem you choose to expose into them, they can make arbitrary changes as arbitrary user IDs. You can use a docker run -u option to pick a different user ID, but you can pick any user ID, even one that belongs to another user on a shared system.
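For example, nothing stops you from picking a uid that belongs to another account on the host (1005 here is just an illustrative uid):
docker run --rm -u 1005 -v /home:/home busybox \
  sh -c 'echo hello > /home/someuser/file-created-as-1005'   # succeeds whenever uid 1005 owns that directory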
It is very reasonable to use sudo as a way to get root privileges for things that need it, and this is a typical out-of-the-box Docker configuration.
At the end of the day the only real gate on this is the Unix permissions on the file /var/run/docker.sock. This is often mode 0660, owned by a dedicated docker group. If you don't mind your normal user being able to read and write arbitrary host files without much control at all, you can add yourself to that group. That's frequently appropriate for something like a developer laptop; but on anything like a production system it deserves some real consideration of its security implications.
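If you decide that tradeoff is acceptable (e.g. on a single-developer laptop), the standard commands for the group route are:
sudo usermod -aG docker "$USER"   # add yourself to the docker group
newgrp docker                     # or log out and back in for it to take effect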

Google Material Design Components on Ubuntu Server on Google Cloud

I cannot get Material Design Components to run on my virtual server. I have tried following their "quick start" page and their Material basics (Web 101) course to no avail. I am able to execute most of the steps in either tutorial, but I cannot see the JavaScript apply to the page. What am I doing wrong? I will detail my process below so that someone can hopefully spot my mistake.
First I create a VM instance on the Google Cloud Platform. It is an Ubuntu 18.04 LTS image with 1 CPU, 3.75 GB memory, and HTTP/HTTPS traffic allowed on the firewall.
Then I install Node.js and NPM on the machine.
sudo apt-get update
sudo apt-get install nodejs
sudo apt-get install npm
Then I clone the codelab from GitHub. (following Web 101 in this example)
git clone https://github.com/material-components/material-components-web-codelabs
...and navigate to the pertinent directory.
cd material-components-web-codelabs/mdc-101/starter
In that directory, I install the dependencies.
npm install
The install works just fine, save for one optional dependency, chokidar/fsevents, which is apparently for Mac OS X anyway.
From the same directory, I start the app.
npm start
At this point, the tutorial says I should be able to reach the site. It says to navigate to http://localhost:8080/. Since I am installing this on a remote, cloud server, I replace "localhost" with the server's external IP. I invariably get a timeout error from the browser.
Ensure that port 8080 is open and listening inside the VM instance by running the telnet, nmap, or netstat commands:
$ telnet localhost 8080
$ nmap <external-ip-vm-instance>
$ netstat -plant
If it is not listening, then this means that the application was not installed correctly.
Look at the firewall rules in GCP to make sure that the VM instance allows ingress traffic to port 8080 (see the gcloud sketch after the ufw command below).
Since you are running Ubuntu, make sure that the default Ubuntu firewall did not block port 8080. If it did, allow access to port 8080 by running the following command:
$ sudo ufw allow 8080/tcp
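On the GCP side, a matching firewall rule can be created from the CLI along these lines (the rule name allow-8080 is just an example):
$ gcloud compute firewall-rules create allow-8080 \
    --direction=INGRESS --allow=tcp:8080 --source-ranges=0.0.0.0/0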

Installing Django, PostgreSQL on Google Compute Engine Debian 7 Instance

I am trying to deploy a Django application on Google Compute Engine. I'm using a Debian 7 image and want to set up Django with Nginx, Gunicorn, virtualenv, supervisor and PostgreSQL. I have everything running fine on my development machine which is running Ubuntu 14.04 with Django installed and PostgreSQL as the backend.
I'm using the tutorial located at http://datacommunitydc.org/blog/2013/12/a-tutorial-for-deploying-a-django-application-that-uses-numpy-and-scipy-to-google-compute-engine-using-apache2-and-modwsgi/. I'm also using the tutorial located at http://michal.karzynski.pl/blog/2013/06/09/django-nginx-gunicorn-virtualenv-supervisor/, as it's specific to virtualenv and PostgreSQL, which I'm using on my development machine. I've set up my GCE instance and installed and updated aptitude. I've installed PostgreSQL; however, when I attempt to create a database user and a new database for the app, I get an error and nothing is created.
Following the tutorial I've run:
$ sudo su - postgres
postgres@django:~$ createuser -P
Enter name of role to add: hello_django
Enter password for new role:
Enter it again:
Shall the new role be a superuser? (y/n) n
Shall the new role be allowed to create databases? (y/n) n
Shall the new role be allowed to create more new roles? (y/n) n
When it attempts to create the new user role I receive the following error:
createuser: could not connect to database postgres: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
When I run ls /etc/init.d it shows that postgresql is there, and the init script reports it as running, but I still can't add the new role. Can someone tell me what I'm doing wrong?
Regards.
I wasn't able to reproduce the issue on my end, but it appears to be an issue with PostgreSQL and its dependencies. You can try removing all installed PostgreSQL components and dependencies and then reinstalling PostgreSQL:
sudo apt-get remove --purge postgresql-9.1*
sudo apt-get install postgresql-9.1
If it's still unable to connect to the database, the issue might be originating from your $PATH, in which case you'll need to point it to /usr/local/bin/psql.
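Either way, it is worth confirming whether the server is actually up and the Unix socket from the error message exists; these are standard Debian tools:
sudo service postgresql status
ls -l /var/run/postgresql/   # the .s.PGSQL.5432 socket should be listed when the server is up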
I have just had the same problem.
This is most likely because the postgres cluster has not been initialised yet.
And the reason it didn't initialise automatically is that you haven't set up the locale of the box yet. This is something that has to be done on Amazon EC2 instances as well.
You need to run:
sudo apt-get install locales
And then:
sudo dpkg-reconfigure locales
I had to choose which locales I wanted to set up; I chose en_AU.UTF-8.
After this I rebooted, then I could run this to initialise the cluster:
sudo pg_createcluster 9.1 main --start
This started the service and created the pg_hba.conf files etc.
I faced a similar problem a while back. It can be resolved using a few simple steps:
As the postgres user, run: initdb --locale en_US.UTF-8 -E UTF8 -D '/var/lib/postgres/data'. Note that depending on the distro, the postgres in that path can be pgsql. You can easily check whether the directory exists with an ls command.
Run systemctl start postgresql (if you have systemd) or just /etc/init.d/postgresql start. These commands must be run as the superuser.
All this is from the ArchWiki.