I installed the AWS CloudWatch agent on my Ubuntu server and unfortunately deleted the /opt/aws/amazon-cloudwatch-agent/ folder myself. How can I regenerate it? I tried the two commands below, but they did not succeed.
wget https://s3.amazonaws.com/amazoncloudwatch-agent/ubuntu/amd64/latest/amazon-cloudwatch-agent.deb
dpkg -i -E ./amazon-cloudwatch-agent.deb
How can I regenerate those files?
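One likely cause (an assumption based on dpkg's documented behaviour): the -E flag means --skip-same-version, so dpkg silently skips the reinstall because the package is still registered as installed. A rough sketch of forcing a fresh unpack:
# Remove the package record first so dpkg unpacks the files again,
# which recreates /opt/aws/amazon-cloudwatch-agent/.
wget https://s3.amazonaws.com/amazoncloudwatch-agent/ubuntu/amd64/latest/amazon-cloudwatch-agent.deb
sudo dpkg --purge amazon-cloudwatch-agent
sudo dpkg -i ./amazon-cloudwatch-agent.deb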
I am trying to customize Amazon SageMaker Notebook Instances using Lifecycle Configurations because I need to install additional pip packages. This means I have to create an on-start.sh and an on-create.sh script within a lifecycle configuration. You can see a sample here.
Now, I have many packages and the installation time might go over 5 minutes, causing a potential timeout. It is suggested to use nohup to run the script as a background job in that case.
But how do I run this with nohup, since I do not have a terminal in this case? Is there a way to run the script as a background job from within the script itself? Is there anything else I am missing? Please advise.
I have done this before; installing many libraries took around 15 minutes. I wrapped the script I actually want to run in a create.sh and ran that create.sh using nohup. You can view the logs in CloudWatch, the SageMaker start won't time out, and as a bonus you get a nohup.out file in the directory where you executed nohup.
Below, the script from https://github.com/aws-samples/amazon-sagemaker-notebook-instance-lifecycle-config-samples/tree/master/scripts/export-to-pdf-enable is wrapped into create.sh:
#!/bin/bash
set -e
# Write create.sh using a quoted heredoc. Note that the EOF marker below (after
# the ln -s line) terminates THIS outer heredoc, not the inner sudo heredoc.
cat <<'EOF'>create.sh
#!/bin/bash
sudo -u ec2-user -i <<'EOF'
set -e
# OVERVIEW
# This script enables Jupyter to export a notebook directly to PDF.
# nbconvert depends on XeLaTeX and several LaTeX packages that are non-trivial to
# install because `tlmgr` is not included with the texlive packages provided by yum.
# REQUIREMENTS
# Internet access is required in on-create.sh in order to fetch the latex libraries from the ctan mirror.
sudo yum install -y texlive*
unset SUDO_UID
ln -s /home/ec2-user/SageMaker/.texmf /home/ec2-user/texmf
EOF
# The outer heredoc consumed the first EOF, so append the terminator that the
# inner `sudo -u ec2-user -i` heredoc inside create.sh still needs.
echo 'EOF' >> create.sh
# Run create.sh in the background so on-create.sh finishes well within the 5-minute limit.
nohup bash create.sh &
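For the original pip-package use case, the same idea can be sketched without the nested-heredoc trick by using a different delimiter; the conda environment name and package names below are placeholders:
#!/bin/bash
set -e
# Write the long-running install script into the ec2-user home directory.
cat <<'SCRIPT' > /home/ec2-user/install-packages.sh
#!/bin/bash
set -e
# Activate the conda environment the notebook kernels use (environment name assumed).
source /home/ec2-user/anaconda3/bin/activate python3
pip install package-one package-two   # placeholder package names
SCRIPT
chown ec2-user:ec2-user /home/ec2-user/install-packages.sh
# Run it in the background so on-create.sh returns well before the 5-minute limit.
nohup sudo -u ec2-user bash /home/ec2-user/install-packages.sh > /home/ec2-user/install-packages.log 2>&1 &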
I am trying to connect my Raspberry Pi 4 device, running Raspberry Pi OS Lite, to AWS IoT Greengrass v2, and I follow these steps:
From the AWS Greengrass console I set up a core device.
On my Raspberry Pi I install the Java 8 runtime:
$ sudo apt-get update
$ sudo apt-get install openjdk-8-jdk
On my Raspberry Pi I download the installer:
curl -s https://d2s8p88vqu9w66.cloudfront.net/releases/greengrass-nucleus-latest.zip > greengrass-nucleus-latest.zip && unzip greengrass-nucleus-latest.zip -d GreengrassCore
On my device I run the installer:
sudo -E java -Droot="/greengrass/v2" -Dlog.store=FILE -jar ./GreengrassCore/lib/Greengrass.jar --aws-region eu-west-1 --thing-name GreengrassQuickStartCore-1773dec1ad2 --thing-group-name GreengrassQuickStartGroup --component-default-user ggc_user:ggc_group --provision true --setup-system-service true --deploy-dev-tools true
Everything seems to be done: my core device was created in the AWS console and its status is "Healthy", but on my Raspberry Pi the folder /greengrass/v2 does not exist and I cannot see any logs.
The troubleshooting documentation for device issues points to /greengrass/v2/logs/ as the log folder, but on my device the greengrass folder does not exist.
Does anyone have a suggestion?
Many thanks in advance.
Did you install the AWS CLI v1? (The v2 version is not supported on the Raspberry Pi.) Be sure to do this before installing the Greengrass Core software.
$ curl "https://s3.amazonaws.com/aws-cli/awscli-bundle.zip" -o "awscli-bundle.zip"
$ unzip awscli-bundle.zip
$ sudo ./awscli-bundle/install -i /usr/local/aws -b /usr/local/bin/aws
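Once the bundle is installed, it may be worth confirming the CLI works and that credentials are in place before running the Greengrass installer (the installer needs AWS credentials when --provision true is used):
aws --version    # should report aws-cli/1.x
aws configure    # set the access key, secret key and default region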
I had a similar error. Be careful with the paths; sometimes relative and absolute paths get mixed up.
Example: GGv2 folder in the filesystem root directory (/greengrass/v2)
cd /greengrass/v2
Example: GGv2 folder relative to the current directory
cd ./greengrass/v2
Example: GGv2 folder in the current user's home directory (/home/<user>/greengrass/v2)
cd ~/greengrass/v2
I assume your log files are located under the filesystem root:
cd /greengrass/v2/logs
If you cannot access the logs folder, try changing its permissions:
sudo chmod 755 /greengrass/v2/logs
cd /greengrass/v2/logs
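If the folder genuinely is not there, it can also help to check where the nucleus actually installed itself; assuming --setup-system-service true created the systemd unit, something like this should show it:
sudo systemctl status greengrass.service   # does the nucleus service exist and run?
sudo ls -la /greengrass/v2                 # default root directory passed via -Droot="/greengrass/v2"
sudo ls -la /greengrass/v2/logs            # the directory is root-owned, so a plain ls may be denied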
When I install the AWS CLI for the root user on CentOS 7, it installs to /usr/local/bin, as it does for other users. The problem is that /usr/local/bin isn't in $PATH for the root user. At first I thought this was a bug in CentOS, one that has been around for a very long time, but it's also possible it is there for security reasons; I don't know.
What would be best practice then to install the AWS CLI for the root user?
To complement Chris's answer, you can install the AWS CLI v2 in a folder visible to root, such as /usr/local/sbin, as follows:
sudo yum install unzip
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install --bin-dir /usr/local/sbin
then confirm with:
aws --version
which should produce:
aws-cli/2.0.44 Python/3.7.3 Linux/3.10.0-1127.el7.x86_64 exe/x86_64.centos.7
This appears to be a bug logged against CentOS since 2012 (in CentOS 6) that has not yet been fixed.
Regarding running the AWS CLI as root, you can still run it by calling /usr/local/bin/aws explicitly, although I understand this is not ideal. Additionally, you should avoid running the AWS CLI as root where possible and run it as a named user instead.
According to the documentation, you can use either --bin-dir or -b to specify a different bin directory, so you could pick a path that both root and named users have in their $PATH.
What worked for me was
sudo ./aws/install --bin-dir /usr/bin
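A quick sanity check after that install (version output will differ on your system):
which aws             # should now print /usr/bin/aws
sudo aws --version    # resolves for root as well, since /usr/bin is on every PATH, including sudo's secure_path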
I am using Docker containers and docker-compose to create ELK containers. After the containers are created, I need to inject a file into Logstash and display it via Kibana.
I had not worked with Docker until three days ago. I have been working on this problem, have gone through at least 10 websites plus YouTube, and still can't understand what I should do.
I succeeded in creating a Docker container and installing docker-compose.
I have pulled docker-elk from git, so I have ready-made YML files for docker-compose, Logstash, Kibana, and Elasticsearch. I have tried to push a file into Logstash, but I can't tell whether I did it right, or how to check it at all.
I saw an option to check the IP addresses of the running containers and open them via ip:5061 and ip:9200, but nothing worked.
I have installed Docker and pulled docker-elk:
sudo amazon-linux-extras install docker
Download docker-elk:
git clone https://github.com/deviantony/docker-elk
Downloaded docker-compose:
sudo curl -L "https://github.com/docker/compose/releases/download/1.22.0/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo mv /usr/local/bin/docker-compose /usr/bin/docker-compose
sudo chmod +x /usr/bin/docker-compose
and created the ELK containers. I tried two commands; the second one worked better:
sudo docker-compose -d
sudo docker-compose -f /full/address/docker-compose.yml up
I expect to see the log file I injected into Logstash displayed in a Kibana graph.
What you need is a log shipper like Filebeat, which does not come with the ELK stack. After you configure Filebeat to send logs to Logstash, you can see the logs.
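As a rough sketch (the network name, image tag, and log path below are assumptions to adapt to your setup), you could run Filebeat as a container next to the docker-elk stack and point it at the Logstash Beats input, which docker-elk exposes on port 5044 by default:
# Minimal filebeat.yml: read a log file and forward it to Logstash.
cat > filebeat.yml <<'EOF'
filebeat.inputs:
  - type: log
    paths:
      - /var/log/myapp/*.log          # placeholder: the file you want to inject
output.logstash:
  hosts: ["logstash:5044"]            # Logstash service name inside the docker-elk network
EOF
# Attach Filebeat to the compose network (assumed to be docker-elk_elk) and mount the config.
docker run -d --name=filebeat --user=root \
  --network docker-elk_elk \
  --volume="$PWD/filebeat.yml:/usr/share/filebeat/filebeat.yml:ro" \
  --volume="/var/log/myapp:/var/log/myapp:ro" \
  docker.elastic.co/beats/filebeat:7.10.0 filebeat -e --strict.perms=false
Once events flow in, open Kibana on port 5601 and create an index pattern to chart them.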
I want to run my own STUN/TURN server instance, and I want to use Amazon EC2. If anybody has any idea about this, please share the steps to create it or any reference link to follow.
Do an SSH login to your EC2 instance, then run the commands below to install and start the TURN server.
The simple way:
sudo apt-get install coturn
If instead you want the latest cutting-edge version, you can download the source code from their downloads page and install it yourself, for example:
sudo -i # skip this if you are already root
apt-get update && apt-get install libssl-dev libevent-dev libhiredis-dev make -y # install the dependencies
wget -O turn.tar.gz http://turnserver.open-sys.org/downloads/v4.5.0.3/turnserver-4.5.0.3.tar.gz # Download the source tar
tar -zxvf turn.tar.gz # unzip
cd turnserver-*
./configure
make && make install
A sample command for running the TURN server:
turnserver -a -o -v -n -u user:root -p 3478 -L INT_IP -r someRealm -X EXT_IP/INT_IP --no-dtls --no-tls
Command description:
-X - your Amazon instance's external IP, optionally paired with the internal IP: EXT_IP/INT_IP
-p - port to be used, default 3478
-a - Use long-term credentials mechanism
-o - Run server process as daemon
-v - 'Moderate' verbose mode.
-n - no configuration file
--no-dtls - Do not start DTLS listeners
--no-tls - Do not start TLS listeners
-u - user credentials to be used
-r - default realm to be used, need for TURN REST API
In your WebRTC app, you can use the TURN server like:
{
  url: 'turn:user@EXT_IP:3478',
  credential: 'root'
}
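To check that the server is actually reachable, coturn also ships a test client; a rough check using the user:root credentials from the command above (replace EXT_IP with the instance's public IP, and open port 3478 in the security group):
turnutils_uclient -v -u user -w root -p 3478 EXT_IP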
One method to install a TURN server on Amazon EC2 would be to choose Debian and install the coturn package, which is the successor of the rfc5766-turn-server.
The configuration file at /etc/turnserver.conf includes EC2-specific instructions. The information provided in this file is quite exhaustive and should answer the majority of configuration questions.
Once configured, the coturn server can be stopped and started like any other service.
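A rough outline of that route on a Debian-based EC2 instance (the /etc/default/coturn step applies to older packages where the daemon ships disabled):
sudo apt-get update && sudo apt-get install -y coturn
echo 'TURNSERVER_ENABLED=1' | sudo tee -a /etc/default/coturn   # enable the daemon on older packages
# Edit /etc/turnserver.conf (realm, external-ip, user credentials), then restart:
sudo systemctl restart coturn
sudo systemctl status coturn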