MERN stack app deployment to AWS EC2 instance - amazon-web-services

Hello, I'm trying to set up my AWS instance and deploy my MERN app (it's not a static app), but I've found so many people doing different things that it got me a little confused. Can anyone explain the process I'll have to go through to have a functional deployed MERN app on AWS? There's no need to go into details, I just need someone to explain the basics.

Setting up an AWS server with NodeJS:
- Create an EC2 instance.
- SSH into the instance.
- Git clone the repo.
- sudo apt-get update
- Install Node.js and npm.
- npm install
- Add any env or required files that are in .gitignore.
- sudo ufw allow ssh
- sudo ufw allow 443/tcp
- sudo ufw allow 80/tcp
Set up PM2 and configure it for port 80:
- $ sudo npm install pm2 -g
- $ pm2 start index.js
- $ pm2 stop index
- Open up your app's index.js file and change the port from 5000 (the default) to 80.
- You'll also need to upload and configure certificate files to use port 443 with HTTPS.
- $ sudo apt-get install libcap2-bin
- $ sudo setcap cap_net_bind_service=+ep $(readlink -f $(which node))
- $ pm2 start index
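For reference, here is the PM2 part as one condensed sketch. It assumes your entry point is index.js and that the app reads process.env.PORT (a common but not universal pattern), so adjust the names to match your project:
# let the node binary bind privileged ports (80/443) without running as root
sudo apt-get install libcap2-bin
sudo setcap cap_net_bind_service=+ep $(readlink -f $(which node))
# start the app under PM2 on port 80 and keep it alive across reboots
PORT=80 pm2 start index.js --name mern-app
pm2 save      # remember the current process list
pm2 startup   # prints the command that registers PM2 as a boot service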

Related

Install Label Studio in Google Cloud and make it available with the public IP

I have a VM instance in Google Cloud with Ubuntu 20.04 LTS, and I set it to allow HTTP traffic.
I need to set up Label Studio (https://github.com/heartexlabs/label-studio) on this VM so anyone can access it by just typing the VM's public IP.
I already tried building it with docker:
sudo docker build -t heartexlabs/label-studio:latest .
But when I run it with:
sudo docker run -d -p 80:80 heartexlabs/label-studio:latest
it doesn't work. Here's the output of the container list:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2e728edbe6d5 heartexlabs/label-studio:latest "./tools/run.sh" About a minute ago Up About a minute 0.0.0.0:80->80/tcp, 8080/tcp gracious_shamir
I also tried to install it with pip and run it with:
label-studio start --host 34.66.116.52 --port 80 testproject
If anyone has experience with Google Cloud VMs and can help set this up with docker or with a WSGI server, I'd appreciate it.
Try this. Your docker ps output shows the app listening on port 8080 inside the container, so map host port 80 to container port 8080 instead of 80:80:
sudo docker run --rm -d -p 80:8080 -v `pwd`/my_project:/label-studio/my_project --name label-studio heartexlabs/label-studio:latest label-studio start my_project --init
I was able to access it from the external IP with this.
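If it still doesn't come up, two quick checks can narrow things down (the container name matches the command above; the curl assumes you run it on the VM itself):
sudo docker logs label-studio   # look for startup errors
curl -I http://localhost:80     # should return an HTTP status line if the app is up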

aws codebuild / using local redis (ubuntu image)

I'm getting into CodeBuild and currently looking to use Redis on a local Ubuntu image, using the following script:
version: 0.2
phases:
  install:
    commands:
      - apt update
      - apt install -y redis-server wget
  pre_build:
    commands:
      - wget https://raw.githubusercontent.com/Hronom/wait-for-redis/master/wait-for-redis.sh
      - chmod +x ./wait-for-redis.sh
      - service redis-server start
      - ./wait-for-redis.sh localhost:6379
  build:
    commands:
      - redis-cli info
      - redis-cli info server
For now it seems to us that docker-compose is not strictly required, so we would first look into doing it this way without compose, expecting standard Ubuntu behaviour.
We're installing Postgres with a similar approach; it starts properly and is fully usable.
Here we're unable to start Redis properly: wait-for-redis keeps retrying (we keep getting the error Could not connect to Redis at localhost:6379: Connection refused).
With an EC2 Linux image (yum-based), we don't have this issue.
What would be the correct way to start Redis in that Ubuntu context?
I just ran into the same problem.
When I added a cat /var/log/redis/*.log to the buildspec, I discovered that Redis was not able to bind:
Creating Server TCP listening socket ::1:6379: bind: Cannot assign requested address
Further research showed this to be a known issue: https://github.com/redis/redis/issues/3241
...which can be fixed by adding these lines to the buildspec (before using Redis):
- sed -i '/^bind/s/bind.*/bind 127.0.0.1/' /etc/redis/redis.conf
- service redis-server restart
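To confirm the rebind worked before the build phase depends on it, a quick ping right after the restart should answer PONG (redis-cli is already installed at that point):
- redis-cli -h 127.0.0.1 ping   # expect: PONG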

Deploy docker container to digital ocean droplet from gitlab-ci

So here is what I want to do:
1. Push to master in git.
2. Have gitlab-ci hear that push and start a pipeline.
3. The pipeline builds the code and pushes a docker container to the gitlab registry.
4. The pipeline logs into a digital ocean droplet via ssh.
5. The pipeline pulls the docker container from the gitlab registry.
6. The pipeline starts the container.
I can get up to step 4 no problem, but step 4 itself just fails every which way. I've tried the ssh key approach:
https://gitlab.com/gitlab-examples/ssh-private-key/blob/master/.gitlab-ci.yml
But that did not work.
So I tried a plain text password approach like this:
image: gitlab/dind:latest
before_script:
  - apt-get update -y && apt-get install sshpass
stages:
  - deploy
deploy:
  stage: deploy
  script:
    - sshpass -p "mypassword" ssh root@x.x.x.x 'echo $HOME'
This version just exits with code 1, like so:
Pseudo-terminal will not be allocated because stdin is not a terminal.
ln: failed to create symbolic link '/sys/fs/cgroup/systemd/name=systemd': Operation not permitted
/usr/local/bin/wrapdocker: line 113: 54 Killed docker daemon $DOCKER_DAEMON_ARGS &> /var/log/docker.log
Timed out trying to connect to internal docker host.
Is there a better way to do this? How can I at the very least access my droplet from inside the gitlab-ci build environment?
I just answered this related question: Create react app + Gitlab CI + Digital Ocean droplet - Pipeline succeeds but Docker container is deleted right after
Here's the solution he's using to get the SSH credentials set up:
before_script:
  ## Install the ssh agent (so we can access the Digital Ocean droplet) and run it.
  - apk update && apk add openssh-client
  - eval $(ssh-agent -s)
  ## Write the environment variable value to the agent store, create the ssh directory and give it the right permissions.
  - echo "$SECRETS_DIGITAL_OCEAN_DROPLET_SSH_KEY" | ssh-add -
  - mkdir -p ~/.ssh
  - chmod 700 ~/.ssh
  ## Make sure that ssh will trust the new host, instead of asking
  - echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config
  ## Test it!
  - ssh -t ${SECRETS_DIGITAL_OCEAN_DROPLET_USER}@${SECRETS_DIGITAL_OCEAN_DROPLET_IP} 'echo $HOME'
Code credit goes to https://stackoverflow.com/users/6655011/leonardo-sarmento-de-castro
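With SSH working, steps 5 and 6 reduce to a remote login, pull, and run. A minimal sketch of the script section, assuming GitLab's predefined CI_REGISTRY, CI_REGISTRY_IMAGE and CI_JOB_TOKEN variables plus the droplet variables above (the container name myapp and the port mapping are placeholders):
script:
  - ssh ${SECRETS_DIGITAL_OCEAN_DROPLET_USER}@${SECRETS_DIGITAL_OCEAN_DROPLET_IP} "docker login -u gitlab-ci-token -p ${CI_JOB_TOKEN} ${CI_REGISTRY}"
  - ssh ${SECRETS_DIGITAL_OCEAN_DROPLET_USER}@${SECRETS_DIGITAL_OCEAN_DROPLET_IP} "docker pull ${CI_REGISTRY_IMAGE}:latest"
  ## Replace any previous container, then start the new one.
  - ssh ${SECRETS_DIGITAL_OCEAN_DROPLET_USER}@${SECRETS_DIGITAL_OCEAN_DROPLET_IP} "docker rm -f myapp || true"
  - ssh ${SECRETS_DIGITAL_OCEAN_DROPLET_USER}@${SECRETS_DIGITAL_OCEAN_DROPLET_IP} "docker run -d --name myapp -p 80:80 ${CI_REGISTRY_IMAGE}:latest"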

Can I run Google Monitoring Agent inside a Kubernetes Pod?

It seems that the Google Monitoring Agent (powered by Stackdriver) should be installed on each Node (i.e. each compute instance, i.e. each machine) of a Kubernetes cluster.
However the new plugins, like Nginx, Redis, ElasticSearch..., need those agents to know the IP of these services. This means having kube-proxy running and set up which should mean running that Google Monitoring Agent on a Pod.
These two conflict: on one side the agent monitors the entire machine, on the other it monitors services running on one or more machines.
Can these Stackdriver plugins work on a Google Container Engine (GKE) / Kubernetes cluster?
To monitor each machine (memory, CPU, disk...) it's possible to install the agent on each node (i.e. on each Compute Instance of your GKE cluster). Note that it won't work with auto-scaling, in the sense that re-created nodes won't have the agent installed.
To monitor services (number of requests/s, client connections...) it's possible to install the agent plugin in another container, so that for example an Nginx Pod runs two containers:
Nginx
Google Monitoring Agent together with the Nginx plugin
Note: Not fully tested yet.
You can install the StackDriver Agent in your Dockerfile.
I have been able to get this working for a couchdb container as follows:
FROM klaemo/couchdb
RUN apt-get update
RUN apt-get install curl lsb-release -y
RUN curl -O https://repo.stackdriver.com/stack-install.sh
RUN apt-get install libyajl2 -y
COPY couchdb.conf /opt/stackdriver/collectd/etc/collectd.d/couchdb.conf
CMD bash stack-install.sh --write-gcm && service stackdriver-agent restart && couchdb
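To try this out, a plain build and run should be enough (the image tag is a placeholder; --write-gcm expects valid Google Cloud credentials to be available to the container):
docker build -t my-couchdb-stackdriver .
docker run -d --name couchdb my-couchdb-stackdriver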
I had tried to use a Stackdriver container in a pod to collect stats about Nginx/Uwsgi in the same pod.
I had some findings that may not be so helpful; just for your reference.
To create the stackdriver image, you may reference the dockerfile created by Keto:
https://hub.docker.com/r/keto/stackdriver/~/dockerfile/
FROM centos:centos7
MAINTAINER Mikael Keto
# add stackdriver repository
RUN curl -o /etc/yum.repos.d/stackdriver.repo https://repo.stackdriver.com/stackdriver-el7.repo
# install stackdriver
RUN yum -y install initscripts stackdriver-agent && yum clean all
RUN mkdir -p /var/lock/subsys; exit 0
ADD run.sh /run.sh
RUN chmod 755 /run.sh
CMD ["/run.sh"]
The run.sh looks like this:
#!/usr/bin/env bash
/opt/stackdriver/stack-config --write-gcm --no-start
/etc/init.d/stackdriver-agent start
while true; do
    sleep 60
    agent_pid=$(cat /var/run/stackdriver-agent.pid 2>/dev/null)
    ps -p $agent_pid > /dev/null 2>&1
    if [ $? != 0 ]; then
        echo "Stackdriver agent pid not found!"
        break
    fi
done
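To make the image available to the cluster, a build and push along these lines should work (keeping the same placeholder project ID and tag as the deployment below; gcloud auth configure-docker assumes a reasonably recent gcloud):
docker build -t gcr.io/<project_id>/stackdriver-agent:<your_version> .
gcloud auth configure-docker
docker push gcr.io/<project_id>/stackdriver-agent:<your_version>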
In the GKE/K8S deployment yaml file:
apiVersion: extensions/v1beta1
kind: Deployment
...
      - name: stackdriver-agent
        image: gcr.io/<project_id>/stackdriver-agent:<your_version>
        command: ['/run.sh']
In my test, I found:
- It reports stats based on [node_name] instead of [container_name].
- It collects many system stats that are meaningful for a node, but since it runs in a pod, they're quite pointless.
Well, I hoped to find some way to collect both the pod and node statistics I need, but I didn't find an easy way to do it. What I did instead was query them through the Google Python API library, but that takes too much time.
There is another way, using a Dockerfile.
When creating the docker image, pre-install the libraries necessary for the stackdriver-agent installation:
FROM mongo
RUN apt-get update && apt-get install -y curl lsb-release
# COPY credential
COPY gcloud-credential.json /etc/google/auth/application_default_credentials.json
ENV GOOGLE_APPLICATION_CREDENTIALS "/etc/google/auth/application_default_credentials.json"
# download Stackdriver Agent installer
RUN curl -O https://repo.stackdriver.com/stack-install.sh
RUN chmod +x /stack-install.sh
# COPY stackdriver mongodb plugin
COPY mongodb.conf /opt/stackdriver/collectd/etc/collectd.d/mongodb.conf
Then install the agent using the Pod lifecycle:
spec:
  containers:
    - image: your_mongo_image
      name: my-mongo
      ports:
        - containerPort: 27017
      lifecycle:
        postStart:
          exec:
            command: ["/stack-install.sh", "--write-gcm"]

How to create stun turn server instance using AWS EC2

I actually want to use my own STUN/TURN server instance, and I want to use Amazon EC2. If anybody has any idea regarding this, please share the steps to create one or any reference link to follow.
Do an SSH login to your EC2 instance, then run the commands below to install and start the TURN server.
The simple way:
sudo apt-get install coturn
If instead you want the latest cutting-edge version, you can download the source code from their downloads page and install it yourself, for example:
sudo -i # ignore if you already in admin mode
apt-get update && apt-get install libssl-dev libevent-dev libhiredis-dev make -y # install the dependencies
wget -O turn.tar.gz http://turnserver.open-sys.org/downloads/v4.5.0.3/turnserver-4.5.0.3.tar.gz # Download the source tar
tar -zxvf turn.tar.gz # unzip
cd turnserver-*
./configure
make && make install
Sample command for running the TURN server:
turnserver -a -o -v -n -u user:root -p 3478 -L INT_IP -r someRealm -X EXT_IP/INT_IP --no-dtls --no-tls
Command description:
-X - your Amazon instance's external and internal IPs, in the form EXT_IP/INT_IP
-p - port to be used, default 3478
-a - Use long-term credentials mechanism
-o - Run server process as daemon
-v - 'Moderate' verbose mode.
-n - no configuration file
--no-dtls - Do not start DTLS listeners
--no-tls - Do not start TLS listeners
-u - user credentials to be used
-r - default realm to be used, need for TURN REST API
In your WebRTC app, you can use the TURN server like this:
{
  url: 'turn:user@EXT_IP:3478',
  credential: 'root'
}
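To verify the server is reachable from outside, coturn ships with a test client; something along these lines should complete without errors (replace EXT_IP with your instance's public IP, and make sure UDP/TCP port 3478 is open in the security group):
turnutils_uclient -u user -w root -p 3478 EXT_IP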
One method to install a turnserver on Amazon EC2 would be to choose Debian and to install the coturn package, which is the successor of the RFC5766-server.
The configuration file at /etc/turnserver.conf includes EC2 specific instructions. The information provided within this file is very exhaustive in general and should answer the majority of configuration questions.
Once configured, the coturn server can be stopped and started the same way as any other service.
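A minimal sketch of that route, assuming a Debian/Ubuntu image (on older Debian packages the daemon also has to be enabled in /etc/default/coturn, which is what the sed line does):
sudo apt-get update && sudo apt-get install -y coturn
sudo sed -i 's/#TURNSERVER_ENABLED=1/TURNSERVER_ENABLED=1/' /etc/default/coturn   # Debian-specific enable flag
sudo systemctl restart coturn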