An error occurs when installing docker on AWS EC2 and creating an instance node - amazon-web-services

I installed Docker to install Keycloak on AWS EC2. However, the following error occurs when creating an instance node.
$ docker-machine create --driver amazonec2 aws-node1
Running pre-create checks...
Creating machine...
(aws-node1) Launching instance...
Waiting for machine to be running, this may take a few minutes...
Detecting operating system of created instance...
Waiting for SSH to be available...
Detecting the provisioner...
Provisioning with ubuntu(systemd)...
Installing Docker...
Error creating machine: Error running provisioning: error installing docker:
[ec2-user@ip-172-31-43-97 ~]$
The full installation procedure is as follows.
$ sudo yum -y install docker
$ docker -v
Docker version 20.10.17, build 100c701
$ sudo service docker start
$ sudo usermod -aG docker ec2-user
$ sudo curl -L https://github.com/docker/compose/releases/download/1.25.0-rc2/docker-compose-`uname -s`-`uname -m` \
    -o /usr/local/bin/docker-compose
$ docker-compose -v
docker-compose version 1.25.0-rc2, build 661ac20e
$ base=https://github.com/docker/machine/releases/download/v0.16.0 &&
curl -L $base/docker-machine-$(uname -s)-$(uname -m) >/tmp/docker-machine &&
sudo install /tmp/docker-machine /usr/local/bin/docker-machine
$ docker-machine -v
docker-machine version 0.16.0, build 702c267f
$ aws configure
AWS Access Key ID [None]: [My Access Key ID]
AWS Secret Access Key [None]: [My Secret Access Key]
Default region name [None]: ap-northeast-2
Default output format [None]:
$ docker-machine create --driver amazonec2 aws-node1
Running pre-create checks...
Creating machine...
(aws-node1) Launching instance...
Waiting for machine to be running, this may take a few minutes...
Detecting operating system of created instance...
Waiting for SSH to be available...
Detecting the provisioner...
Provisioning with ubuntu(systemd)...
Installing Docker...
Error creating machine: Error running provisioning: error installing docker:
I set up "aws configure" with an Access Key ID and Secret Access Key I created for the CLI. However, the Docker instance node is still not created, no matter what I try.
EC2 was created with "Amazon Linux 2 AMI (HVM) - Kernel 5.10, SSD Volume Type".
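The error after "Installing Docker..." is truncated, but one common cause is the amazonec2 driver falling back to its built-in default region and AMI rather than the region set with `aws configure`. A hedged sketch of a retry with the region and SSH user pinned explicitly (the flag names are from the docker-machine amazonec2 driver; whether this fixes your case depends on the AMI the driver ends up choosing). The command is built into a variable and echoed here so the flags are visible:

```shell
# Sketch: pin the region and SSH user explicitly so provisioning matches
# the instance OS the driver launches. Echoed rather than executed here.
region="ap-northeast-2"
create_cmd="docker-machine create --driver amazonec2 --amazonec2-region $region --amazonec2-ssh-user ubuntu aws-node1"
echo "$create_cmd"
```

If it still fails, re-running with the global debug flag (`docker-machine --debug create ...`) prints the full SSH and provisioning transcript, which usually contains the real install error that the truncated message hides.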

Related

Start Amazon CloudWatch Agent at instance startup

I have an Amazon Linux AMI Instance.
I installed CloudWatch Agent and start the service using this command.
/opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl -a fetch-config -m ec2 -c file:/opt/aws/amazon-cloudwatch-agent/bin/config.json -s
I can check the status of the CloudWatch Agent using this command and see that it's running
sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl -m ec2 -a status
However, I cannot find the service using
service amazon-cloudwatch-agent status
I want the CloudWatch Agent process to start every time the instance starts. Something along the lines of systemctl enable amazon-cloudwatch-agent would do, but I don't have systemctl installed on my machine.
Is there any way I can setup so that the CloudWatch Agent starts every time the instance starts?
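One sysvinit-era approach (a sketch, not verified on this exact AMI): append the agent's fetch-config invocation to rc.local so it runs at every boot. The snippet writes to a local ./rc.local so the steps are visible; on the instance the target is /etc/rc.d/rc.local and the append needs sudo.

```shell
# Sketch for AMIs without systemctl: run the agent's fetch-config at boot
# by appending it to rc.local. Written to ./rc.local here for
# illustration; on the instance use /etc/rc.d/rc.local with sudo.
RC_LOCAL=./rc.local
AGENT_CTL=/opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl
echo "$AGENT_CTL -a fetch-config -m ec2 -c file:/opt/aws/amazon-cloudwatch-agent/bin/config.json -s" >> "$RC_LOCAL"
chmod +x "$RC_LOCAL"
```

On sysvinit systems, `sudo chkconfig amazon-cloudwatch-agent on` may also work if the agent package installed an init script; check `ls /etc/init.d/` first.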

Why is the aws cli not found on amazonlinux2 ami?

The AmazonLinux2 AMI I am using for my Docker hosts does not appear to have the AWS CLI installed. The log has an error from the user data script that tries to run an aws command:
/var/lib/cloud/instance/scripts/part-001: line 7: aws: command not found
Then I connected with SSH to the instance for a sanity check, and aws is definitely not found:
[ec2-user@ip-X-X-X-X ~]$ cat /etc/os-release
NAME="Amazon Linux"
VERSION="2"
ID="amzn"
ID_LIKE="centos rhel fedora"
VERSION_ID="2"
PRETTY_NAME="Amazon Linux 2"
ANSI_COLOR="0;33"
CPE_NAME="cpe:2.3:o:amazon:amazon_linux:2"
HOME_URL="https://amazonlinux.com/"
[ec2-user@ip-X-X-X-X ~]$ aws --version
-bash: aws: command not found
I thought the AWS CLI was installed by default on all AmazonLinux AMIs? I don't remember ever having to install it myself before.
This article even says that the CLI v1 is installed by default on AmazonLinux2:
AWS Docs Link
So how is it possible that it's not found on this stock AMI? Do only some of the AmazonLinux2 AMIs have the CLI pre-installed? For reference, I am using this AMI:
amzn2-ami-minimal-hvm-2.0.20200917.0-x86_64-ebs (ami-0a6993b2978bd23cb)
From this post on AWS forum:
Minimal has a smaller set of packages installed by default. For example, a lot of AWS specific packages are installed on the default for easy integration to other AWS services. The minimal do not have these installed. This gives a much lower footprint for those who are not directly interacting with other AWS services, or who want to cherry-pick which ones they install.
If you want awscli, you can install it:
sudo yum install -y awscli
To install the latest version of the AWS CLI (v2), see this doc:
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install
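After either install, a quick sanity check (a generic sketch, nothing AMI-specific) confirms the binary actually landed on PATH:

```shell
# Report whether an `aws` binary is now on PATH, and where it resolved
# from; prints "<not found>" when neither install has run yet.
aws_path="$(command -v aws || true)"
echo "aws resolves to: ${aws_path:-<not found>}"
```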

AWS Elastic Beanstalk CLI "eb ssh" send command to instance when opening it via itermocil

eb ssh -n 1
would connect to the currently selected env and instance 1 in the list of instances.
Is it somehow possible to execute a command once the shell is open?
I'm using itermocil and would like to automatically execute a tail -
Right now my config looks like this:
windows:
  - name: general
    root: ~/Documents/LocalProjects/project
    layout: tiled
    panes:
      - commands:
          - cd web
          - eb ssh -n 1
      - commands:
          - cd worker
          - eb ssh -n 1
It seems it's possible with a newer version of the EB CLI:
➜ eb --version
EB CLI 3.10.2 (Python 2.7.1)
➜ eb ssh --command "pwd"
INFO: Running ssh -i /Users/xxx/.ssh/xxx ec2-user@0.1.2.3 pwd
/home/ec2-user
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/eb3-ssh.html
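Applied to the itermocil config in the question, a pane command could then become something like the following (a sketch; the log path is a placeholder for whatever you want to tail):

```yaml
panes:
  - commands:
      - cd web
      - eb ssh -n 1 --command "tail -f /var/log/some.log"
```

Note that `eb ssh --command` runs the command and exits rather than leaving an interactive shell, which fits a dedicated tail pane.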

Configuring AWS Elastic Beanstalk Timezone for Auto Scaling

I have a single-instance server deployed on AWS Elastic Beanstalk that needs timezone configuration. I changed the timezone by logging into the EC2 environment over SSH and updating it with the Linux commands listed below:
sudo rm /etc/localtime
sudo ln -sf /usr/share/zoneinfo/Europe/Istanbul /etc/localtime
sudo reboot
Everything is fine while the server runs as a single instance. The problem arose when I wanted to use the Auto Scaling / Load Balancing feature. On a single instance, updating the timezone on the Linux AMI works, but in auto scaling mode the instances are created/destroyed/recreated according to the threshold metrics, so all the configuration is lost.
My simple question is: how can I change/configure the timezone for an auto-scaling, load-balancing AWS Elastic Beanstalk environment?
You can configure the newly starting server with ebextensions.
Here's an example that works for me. Add the following command into the file .ebextensions/timezone.config:
commands:
  set_time_zone:
    command: ln -f -s /usr/share/zoneinfo/US/Pacific /etc/localtime
The other answers here only partially worked for me (I got errors when deploying with them). After some modifications, the following worked for me; I believe it has something to do with "cwd" and "permissions".
commands:
  0000_0remove_localtime:
    command: rm -rf /etc/localtime
  0000_1change_clock:
    command: sed -i 's/UTC/Asia\/Singapore/g' /etc/sysconfig/clock
    cwd: /etc/sysconfig
  0000_2link_singapore_timezone:
    command: ln -f -s /usr/share/zoneinfo/Asia/Singapore /etc/localtime
    cwd: /etc
For my first answer on StackOverflow ... I have to add new information to an excellent earlier answer.
For Amazon Linux 2 on Elastic Beanstalk, there is a new, simpler method of setting the time. Add the following commands to the file .ebextensions/xxyyzz.config:
container_commands:
  01_set_bne:
    command: "sudo timedatectl set-timezone Australia/Brisbane"
  02_restart_crond:
    command: "sudo systemctl restart crond.service"
I'm not sure if the second command is absolutely essential, but the instances certainly play nice with it there (especially with tasks due to happen right away!). Note the two commands need separate keys, as shown; YAML silently drops a duplicated command: key.
You can also configure it via ssh in the command line:
when connected to your Elastic Beanstalk Instance:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/set-time.html#change_time_zone
sudo ln -sf /usr/share/zoneinfo/America/Montreal /etc/localtime
You can connect to your EB instance with the eb command line tool.
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/eb3-cmd-commands.html
eb ssh
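Whichever approach is used, a quick check (generic, nothing Elastic Beanstalk-specific) shows which zone the instance ends up with:

```shell
# Print the zone the system currently resolves: the abbreviated name
# from date(1), plus where /etc/localtime points if it is a symlink.
date +%Z
readlink /etc/localtime 2>/dev/null || echo "/etc/localtime is not a symlink"
```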

403 Forbidden when downloading a container image from the registry outside GCE

I am trying to download an image from the Google container registry in a CoreOS machine running in other server (not GCE).
I configured a new service account:
core@XXXX ~ $ docker run -t -i -v $(pwd)/keys:/tmp/keys --name gcloud-config ernestoalejo/google-cloud-sdk-with-docker gcloud auth activate-service-account XXXXXXX@developer.gserviceaccount.com --key-file /tmp/keys/key.p12 --project XXXX
Activated service account credentials for: [XXXXXXX@developer.gserviceaccount.com]
The account is active, but when I try to download the container image it returns a forbidden HTTP status.
core@XXXX ~ $ /usr/bin/docker run --volumes-from gcloud-config --rm -v /var/run/docker.sock:/var/run/docker.sock ernestoalejo/google-cloud-sdk-with-docker sh -c "gcloud preview docker pull gcr.io/XXXXX/influxdb"
Pulling repository gcr.io/XXXXX/influxdb
time="2015-05-08T06:38:55Z" level="fatal" msg="HTTP code: 403"
ERROR: (gcloud.preview.docker) A Docker command did not run successfully.
Tried to run: 'docker pull gcr.io/XXXXX/influxdb'
Exit code: 1
There is only one account in the server and is correctly configured:
core@XXXX ~ $ /usr/bin/docker run --volumes-from gcloud-config --rm -v /var/run/docker.sock:/var/run/docker.sock ernestoalejo/google-cloud-sdk-with-docker sh -c "gcloud auth list"
To set the active account, run:
$ gcloud config set account ACCOUNT
Credentialed accounts:
- XXXXXXXXXXXXX@developer.gserviceaccount.com (active)
How can I authorize the external machine to download images from the registry?
NOTE: The image ernestoalejo/google-cloud-sdk-with-docker is the same as google/cloud-sdk but with this issue fixed.
UPDATE: I have also tried the solution of this answer, but it makes no difference.
PROJECT_ID=XXXXXX
ROBOT=XXXXXX@developer.gserviceaccount.com
gsutil acl ch -u $ROBOT:R gs://artifacts.$PROJECT_ID.appspot.com
gsutil -m acl ch -R -u $ROBOT:R gs://artifacts.$PROJECT_ID.appspot.com
gsutil defacl ch -u $ROBOT:R gs://artifacts.$PROJECT_ID.appspot.com
It seems that the new Frankfurt region of Digital Ocean can't access the Google Container Registry at all. It always returns a 403 Forbidden. As soon as I used a server in London everything started working.