I want to use my own STUN/TURN server instance, and I want to host it on Amazon EC2. If anybody has any idea about this, please share the steps to set it up or any reference link to follow.
SSH into your EC2 instance, then run the commands below to install and start the TURN server.
The simple way:
sudo apt-get install coturn
If instead you want the latest cutting-edge version, you can download the source code from their downloads page and install it yourself, for example:
sudo -i # skip if you are already root
apt-get update && apt-get install libssl-dev libevent-dev libhiredis-dev make -y # install the dependencies
wget -O turn.tar.gz http://turnserver.open-sys.org/downloads/v4.5.0.3/turnserver-4.5.0.3.tar.gz # Download the source tar
tar -zxvf turn.tar.gz # unzip
cd turnserver-*
./configure
make && make install
Sample command for running the TURN server:
turnserver -a -o -v -n -u user:root -p 3478 -L INT_IP -r someRealm -X EXT_IP/INT_IP --no-dtls --no-tls
Command description (the matching EC2 security group rules are sketched after this list):
-X - your Amazon instance's external and internal IPs, as EXT_IP/INT_IP
-p - port to be used, default 3478
-a - Use long-term credentials mechanism
-o - Run server process as daemon
-v - 'Moderate' verbose mode.
-n - no configuration file
--no-dtls - Do not start DTLS listeners
--no-tls - Do not start TLS listeners
-u - user credentials to be used
-r - default realm to be used; needed for the TURN REST API
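The EC2 security group also has to allow the TURN ports, otherwise clients can't reach the relay. A rough sketch with the AWS CLI (the security group ID is a placeholder, and the relay range assumes coturn's defaults; adjust it if you pass --min-port/--max-port):
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol udp --port 3478 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 3478 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol udp --port 49152-65535 --cidr 0.0.0.0/0 # default relay port range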
In your WebRTC app, you can use the TURN server like this:
{
url: 'turn:user@EXT_IP:3478',
credential: 'root'
}
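Before plugging it into the app, you can sanity-check that the server answers. A quick sketch, assuming the turnutils test clients that ship with coturn and the same user/password as above:
turnutils_uclient -u user -w root -p 3478 -v EXT_IP # run a short allocation test against the server
nc -zv EXT_IP 3478 # or just confirm the TCP listener is reachable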
One method to install a turnserver on Amazon EC2 would be to choose Debian and to install the coturn package, which is the successor of the RFC5766-server.
The configuration file at /etc/turnserver.conf includes EC2 specific instructions. The information provided within this file is very exhaustive in general and should answer the majority of configuration questions.
Once configured, the coturn server can be stopped and started just like any other service.
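For example, with systemd on a recent Debian image (on some older coturn packages you also have to set TURNSERVER_ENABLED=1 in /etc/default/coturn before the service will start):
sudo systemctl enable coturn # start at boot
sudo systemctl restart coturn # pick up changes to /etc/turnserver.conf
sudo systemctl status coturn # check that it is running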
I am trying to connect my Raspberry Pi 4 device running Raspberry Pi OS Lite to AWS IoT Greengrass v2, and I did the following steps:
From the AWS Greengrass console I set up a core device.
On my Raspberry Pi I installed the Java 8 runtime:
$ sudo apt-get update
$ sudo apt-get install openjdk-8-jdk
On my Raspberry Pi I downloaded the installer:
curl -s https://d2s8p88vqu9w66.cloudfront.net/releases/greengrass-nucleus-latest.zip > greengrass-nucleus-latest.zip && unzip greengrass-nucleus-latest.zip -d GreengrassCore
On my device I ran the installer:
sudo -E java -Droot="/greengrass/v2" -Dlog.store=FILE -jar ./GreengrassCore/lib/Greengrass.jar --aws-region eu-west-1 --thing-name GreengrassQuickStartCore-1773dec1ad2 --thing-group-name GreengrassQuickStartGroup --component-default-user ggc_user:ggc_group --provision true --setup-system-service true --deploy-dev-tools true
Everything seems to be done: my core device was created in the AWS console and its status is "Healthy", but on my Raspberry Pi the folder /greengrass/v2 does not exist and I cannot see any logs.
The troubleshooting documentation reports /greengrass/v2/logs/ as the log folder, but on my device the greengrass folder does not exist.
Does anyone have a suggestion about this?
Many thanks in advance.
Did you install the AWS CLI v1? (The v2 version is not supported on the Raspberry Pi.) Be sure to do this before installing the Greengrass Core software:
$ curl "https://s3.amazonaws.com/aws-cli/awscli-bundle.zip" -o "awscli-bundle.zip"
$ unzip awscli-bundle.zip
$ sudo ./awscli-bundle/install -i /usr/local/aws -b /usr/local/bin/aws
I had a similar error. Be careful with the paths; it is easy to mix up relative and absolute paths.
Example: GGv2 folder in the filesystem root directory (/greengrass/v2)
cd /greengrass/v2
Example: GGv2 folder relative to the current directory
cd ./greengrass/v2
Example: GGv2 folder in the current user's home directory (e.g. /home/<user>/greengrass/v2)
cd ~/greengrass/v2
I assume your log files are located under the filesystem root:
cd /greengrass/v2/logs
If you cannot access the logs folder, try changing its permissions:
sudo chmod 755 /greengrass/v2/logs
cd /greengrass/v2/logs
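If the installer was run with --setup-system-service true (as in the command above), two more quick checks are the systemd unit and a filesystem search for the root folder, in case it ended up somewhere other than /greengrass/v2 (a sketch; the unit name is the one the installer normally registers):
sudo systemctl status greengrass.service
sudo find / -maxdepth 4 -type d -path '*greengrass*' 2>/dev/null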
Hello, I'm trying to set up my AWS instance and deploy my MERN app (it's not a static app), but I've found so many people doing different things that I got a little confused. Can anyone explain the process I will have to go through to have a functional MERN app deployed with AWS? There is no need to go into details; I just need someone to explain the basics.
Setting up an AWS server with NodeJS:
- Create an instance
- SSH into the instance
- git clone the repo
- sudo apt-get update
- Install npm
- npm install
- Add any env or other required files that are in .gitignore
- sudo ufw allow ssh
- sudo ufw allow 443/tcp
- sudo ufw allow 80/tcp
Set up PM2 and configure it for port 80:
- $ sudo npm install pm2 -g
- $ pm2 start index.js
- $ pm2 stop index
- Open your app's index.js file and change the port from 5000 (the default) to 80
- You also need to upload and configure certificate files to use port 443 with HTTPS
- $ sudo apt-get install libcap2-bin
- $ sudo setcap cap_net_bind_service=+ep `readlink -f \`which node\``
- $ pm2 start index
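To keep the app alive across reboots, PM2 can register itself as a startup service and snapshot the current process list (standard PM2 commands; PM2 detects the init system itself and prints the exact command to run):
- $ pm2 startup # prints a command to run with sudo that registers PM2 as a boot service
- $ pm2 save # saves the currently running apps so they are restored on boot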
I am using Docker containers and docker-compose to create ELK containers. After the containers are created, I need to inject a file into Logstash and display it via Kibana.
I hadn't worked with Docker until three days ago. I have been working on this problem, surfed at least 10 websites plus YouTube, and still can't understand what I should do.
I succeeded in creating a Docker container and in installing/creating (not sure how to say it) docker-compose.
I have pulled docker-elk from Git, so I have ready-made YML files for docker-compose, Logstash, Kibana, and Elasticsearch. I have tried to push a file into Logstash, but I can't tell whether I did it right, or how to check it at all.
I saw an option to check the IP addresses of the running containers and access them via ip:5601 and ip:9200, but nothing has worked.
I have installed Docker and pulled docker-elk:
sudo amazon-linux-extras install docker
Download docker-elk:
git clone https://github.com/deviantony/docker-elk
Downloaded docker-compose:
sudo curl -L "https://github.com/docker/compose/releases/download/1.22.0/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo mv /usr/local/bin/docker-compose /usr/bin/docker-compose
sudo chmod +x /usr/bin/docker-compose
And created the ELK containers. I tried two commands; the second one worked better:
sudo docker-compose -d
sudo docker-compose -f /full addres/ docker-compose.yml up
I expect to see the log file that was injected into Logstash displayed as a Kibana graph.
What you need is a log shipper like Filebeat, and that does not come with the ELK stack. After you configure Filebeat to send logs to Logstash, you will be able to see the logs.
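A rough sketch of running Filebeat as an extra container next to the docker-elk stack (the image tag, paths, and the compose network name here are assumptions; filebeat.yml has to point Filebeat's output at the Logstash container):
sudo docker run -d --name filebeat --network docker-elk_elk -v /full/path/filebeat.yml:/usr/share/filebeat/filebeat.yml:ro -v /full/path/logs:/logs:ro docker.elastic.co/beats/filebeat:7.17.0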
I am trying to create a job that connects to an SFTP server from AWS services to bring files into S3 storage in AWS. It will be an automated job that runs every night and brings data into S3. I have seen documentation about how to connect to AWS and import data into S3 manually, but I have found nothing about connecting an external SFTP server to bring data into S3. Is this doable?
You can now use the managed SFTP service by AWS (AWS Transfer for SFTP). It provides a fully managed SFTP server which is easy to set up and is reliable, scalable, and durable. It uses S3 as the backend for storing files.
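A rough sketch of setting it up with the AWS CLI (the server ID, role ARN, bucket path, and user name are placeholders; the same can be done from the console):
aws transfer create-server --identity-provider-type SERVICE_MANAGED
aws transfer create-user --server-id s-1234567890abcdef0 --user-name nightly-job --role arn:aws:iam::123456789012:role/sftp-s3-access --home-directory /my-bucket/incoming
aws transfer import-ssh-public-key --server-id s-1234567890abcdef0 --user-name nightly-job --ssh-public-key-body "ssh-rsa AAAA..."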
Use S3FS to configure an SFTP connection directly to S3.
All you need to do is install S3FS:
https://github.com/s3fs-fuse/s3fs-fuse/wiki/Installation-Notes
Install the dependencies for FUSE and S3FS.
CentOS/RHEL Users:
# yum install gcc libstdc++-devel gcc-c++ curl-devel libxml2-devel openssl-devel mailcap
Ubuntu Users:
$ sudo apt-get install build-essential libcurl4-openssl-dev libxml2-dev mime-support
Download and compile the latest FUSE:
# wget https://github.com/libfuse/libfuse/releases/download/fuse-2.9.7/fuse-2.9.7.tar.gz
# tar xzf fuse-2.9.7.tar.gz
# cd fuse-2.9.7
# ./configure --prefix=/usr/local
# make && make install
# export PKG_CONFIG_PATH=/usr/local/lib/pkgconfig
# ldconfig
# modprobe fuse
Download and compile the latest S3FS:
https://code.google.com/archive/p/s3fs/downloads
# cd /usr/src/
# wget https://s3fs.googlecode.com/files/s3fs-1.74.tar.gz
# tar xzf s3fs-1.74.tar.gz
# cd s3fs-1.74
# ./configure --prefix=/usr/local
# make && make install
Set up access keys:
# echo AWS_ACCESS_KEY_ID:AWS_SECRET_ACCESS_KEY > ~/.passwd-s3fs
# chmod 600 ~/.passwd-s3fs
Mount the S3 bucket:
# mkdir /tmp/cache
# mkdir /s3mnt
# chmod 777 /tmp/cache /s3mnt
# s3fs -o use_cache=/tmp/cache mydbbackup /s3mnt
Make your mount point the SFTP user's home directory; this will direct the files transferred via SFTP to S3.
NOTE: Do not forget to add permissions on your S3 bucket to allow authenticated AWS users.
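To make the mount permanent and point the SFTP user at it, a possible follow-up (the bucket, mount point, and user name mirror the examples above; the fstab line assumes a reasonably recent s3fs):
# echo "mydbbackup /s3mnt fuse.s3fs _netdev,allow_other,use_cache=/tmp/cache 0 0" >> /etc/fstab
# useradd -d /s3mnt sftpuser
# passwd sftpuser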
It seems that the Google Monitoring Agent (powered by Stackdriver) should be installed on each Node (i.e. each compute instance, i.e. each machine) of a Kubernetes cluster.
However, the new plugins, like Nginx, Redis, Elasticsearch..., need those agents to know the IPs of these services. This means having kube-proxy running and set up, which should mean running the Google Monitoring Agent in a Pod.
These two conflict: on one side the agent monitors the entire machine; on the other, it monitors services running on one or more machines.
Can these Stackdriver plugins work on a Google Container Engine (GKE) / Kubernetes cluster?
To monitor each machine (memory, CPU, disk...), it's possible to install the agent on each node (i.e. on each Compute Instance of your GKE cluster). Note that this won't survive auto-scaling: re-created nodes won't have the agent installed.
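For the per-node case, a minimal sketch looks like this (the node name and zone are placeholders; the install script and --write-gcm flag are the same ones used in the Dockerfile examples below):
gcloud compute ssh gke-mycluster-default-pool-abc123 --zone us-central1-a
curl -O https://repo.stackdriver.com/stack-install.sh
sudo bash stack-install.sh --write-gcm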
To monitor services (number of requests/s, client connections...), it's possible to install the agent plugin in another container, so that for example an Nginx Pod runs two containers:
Nginx
Google Monitoring Agent together with the Nginx plugin
Note: Not fully tested yet.
You can install the StackDriver Agent in your Dockerfile.
I have been able to get this working for a couchdb container as follows:
FROM klaemo/couchdb
RUN apt-get update
RUN apt-get install curl lsb-release -y
RUN curl -O https://repo.stackdriver.com/stack-install.sh
RUN apt-get install libyajl2 -y
COPY couchdb.conf /opt/stackdriver/collectd/etc/collectd.d/couchdb.conf
CMD bash stack-install.sh --write-gcm && service stackdriver-agent restart && couchdb
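To build and run it (the image name is arbitrary; on GCE/GKE the agent picks up credentials from the instance's service account):
docker build -t couchdb-stackdriver .
docker run -d --name couchdb couchdb-stackdriver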
I had tried to use a Stackdriver container in a pod to collect stats about Nginx/Uwsgi in the same pod.
I had some findings that may not be so helpful; just for your reference.
To create the Stackdriver image, you may reference the Dockerfile created by Keto:
https://hub.docker.com/r/keto/stackdriver/~/dockerfile/
FROM centos:centos7
MAINTAINER Mikael Keto
# add stackdriver repository
RUN curl -o /etc/yum.repos.d/stackdriver.repo https://repo.stackdriver.com/stackdriver-el7.repo
# install stackdriver
RUN yum -y install initscripts stackdriver-agent && yum clean all
RUN mkdir -p /var/lock/subsys; exit 0
ADD run.sh /run.sh
RUN chmod 755 /run.sh
CMD ["/run.sh"]
The run.sh looks like this:
#!/usr/bin/env bash
/opt/stackdriver/stack-config --write-gcm --no-start
/etc/init.d/stackdriver-agent start
while true; do
sleep 60
agent_pid=$(cat /var/run/stackdriver-agent.pid 2>/dev/null)
ps -p $agent_pid > /dev/null 2>&1
if [ $? != 0 ]; then
echo "Stackdriver agent pid not found!"
break;
fi
done
In the GKE/K8s deployment YAML file:
apiVersion: extensions/v1beta1
kind: Deployment
...
- name: stackdriver-agent
image: gcr.io/<project_id>/stackdriver-agent:<your_version>
command: ['/run.sh']
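To get that image into gcr.io in the first place, roughly (project ID and tag are the same placeholders as in the YAML above):
docker build -t gcr.io/<project_id>/stackdriver-agent:<your_version> .
gcloud auth configure-docker # one-time: lets docker push to gcr.io
docker push gcr.io/<project_id>/stackdriver-agent:<your_version>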
In my test, I found
It will report stats based on [node_name] instead of [container_name].
It will collect many system stats that are meaningful for a node, but since it is in a pod, it's quite pointless.
Well, I hoped to find some way to collect both the pod and node statistics that I need, but I didn't find an easy way to do that. What I did instead was collect them via the Google Python API client library, but that takes too much time.
There is another way, using a Dockerfile.
When creating the Docker image, pre-install the libraries necessary for the stackdriver-agent installation:
FROM mongo
RUN apt-get update && apt-get install -y curl lsb-release
# COPY credential
COPY gcloud-credential.json /etc/google/auth/application_default_credentials.json
ENV GOOGLE_APPLICATION_CREDENTIALS "/etc/google/auth/application_default_credentials.json"
# download Stackdriver Agent installer
RUN curl -O https://repo.stackdriver.com/stack-install.sh
RUN chmod +x /stack-install.sh
# COPY stackdriver mongodb plugin
COPY mongodb.conf /opt/stackdriver/collectd/etc/collectd.d/mongodb.conf
Then install the agent using the Pod lifecycle postStart hook:
spec:
containers:
- image: your_mongo_image
name: my-mongo
ports:
- containerPort: 27017
lifecycle:
postStart:
exec:
command: ["/stack-install.sh", "--write-gcm"]