Configuring AWS Elastic Beanstalk Timezone for Auto Scaling

I have a single-instance server deployed on AWS Elastic Beanstalk that needs its timezone configured. I changed the timezone by logging into the EC2 instance over SSH and updating it with the Linux commands listed below:
sudo rm /etc/localtime
sudo ln -sf /usr/share/zoneinfo/Europe/Istanbul /etc/localtime
sudo reboot
Everything is fine while the server runs as a single instance. The problem arose when I wanted to use the Auto Scaling and Load Balancing features. On a single instance, updating the timezone on the Linux AMI works, but in auto scaling mode the instances are created, destroyed, and recreated according to the threshold metrics, so all of that configuration is lost.
My simple question is: how can I change/configure the timezone for an auto scaling, load balancing environment in AWS Elastic Beanstalk?

You can configure the newly started instances with ebextensions.
Here's an example that works for me. Add the following command to the file .ebextensions/timezone.config:
commands:
  set_time_zone:
    command: ln -f -s /usr/share/zoneinfo/US/Pacific /etc/localtime
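As a quick sanity check after a deployment, you can SSH into an instance and confirm the link points where you expect (a minimal check, assuming the eb CLI is set up for the environment):
eb ssh
ls -l /etc/localtime    # should point at /usr/share/zoneinfo/US/Pacific
date                    # should print the time in the configured zone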

The answers here only partially worked for me (I got errors when deploying with the configurations above). After some modifications, the following worked for me; I believe the difference comes down to the "cwd" and permissions. (The numeric prefixes just keep the commands running in order, since ebextensions commands execute in alphabetical order by name.)
commands:
  0000_0remove_localtime:
    command: rm -rf /etc/localtime
  0000_1change_clock:
    command: sed -i 's/UTC/Asia\/Singapore/g' /etc/sysconfig/clock
    cwd: /etc/sysconfig
  0000_2link_singapore_timezone:
    command: ln -f -s /usr/share/zoneinfo/Asia/Singapore /etc/localtime
    cwd: /etc

For my first answer on Stack Overflow ... I have to add new information to an excellent earlier answer.
For Amazon Linux 2 on Elastic Beanstalk, there is a new, simpler way of setting the time. Add the following commands to the file .ebextensions/xxyyzz.config:
container_commands:
  01_set_bne:
    command: "sudo timedatectl set-timezone Australia/Brisbane"
  02_restart_cron:
    command: "sudo systemctl restart crond.service"
I'm not sure if the second command is absolutely essential, but the instances certainly play nicely with it there (especially with tasks due to happen right away!).
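If you want to confirm the change on a running instance, timedatectl with no arguments reports the active zone (a quick manual check, not part of the deployment itself):
timedatectl
# the "Time zone" line should show Australia/Brisbane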

You can also configure it over SSH on the command line when connected to your Elastic Beanstalk instance:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/set-time.html#change_time_zone
sudo ln -sf /usr/share/zoneinfo/America/Montreal /etc/localtime
You can connect to your EB instance with the eb command line tool.
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/eb3-cmd-commands.html
eb ssh

Related

How to run a simple Docker container when an EC2 is launched in an AWS auto-scaling group?

$ terraform version
Terraform v0.14.4
I'm using Terraform to create an AWS autoscaling group, and it successfully launches an EC2 instance via a launch template, also created by the same Terraform plan. I added the following user_data definition in the launch template. The AMI I'm using already has Docker configured, and has the Docker image that I need.
user_data = filebase64("${path.module}/docker_run.sh")
and the docker_run.sh file simply contains:
docker run -p 80:3000 -d 1234567890.dkr.ecr.us-east-1.amazonaws.com/node-app:latest
However, when I ssh to the EC2 instance, the container is NOT running. What am I missing?
Update:
Per Marcin's comment, I see the following in /var/log/cloud-init-output.log:
Jan 11 22:11:45 cloud-init[3871]: __init__.py[WARNING]: Unhandled non-multipart (text/x-not-multipart) userdata: 'docker run -p 80:3000 -d...'
From the AWS docs and what you've posted, the likely reason is that you are missing the #!/bin/bash shebang in your docker_run.sh:
User data shell scripts must start with the #! characters and the path to the interpreter you want to read the script (commonly /bin/bash).
Thus your docker_run.sh should be:
#!/bin/bash
docker run -p 80:3000 -d 1234567890.dkr.ecr.us-east-1.amazonaws.com/node-app:latest
If this still fails, please check /var/log/cloud-init-output.log on the instance for errors.
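For reference, here is a minimal sketch of how the corrected script can be wired into a launch template; the resource name, AMI ID, and instance type below are placeholders, not values from the question:
resource "aws_launch_template" "node_app" {
  name_prefix   = "node-app-"
  image_id      = "ami-0123456789abcdef0"   # placeholder: an AMI with Docker preinstalled
  instance_type = "t3.micro"

  # cloud-init runs this as a shell script on first boot, so it must start with #!/bin/bash
  user_data = filebase64("${path.module}/docker_run.sh")
}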

How to create a health check for XRAY Daemon Task

I'm trying to implement X-Ray for our AWS ECS Spring Boot application. To do so I'm creating a new task with a separate Dockerfile just for the X-Ray daemon, as suggested by the AWS documentation and in the answer to another question I asked about the daemon setup.
However, when I try to deploy this to AWS, the load balancer requires a health check endpoint to be able to determine that the service has been deployed successfully.
There is no health check functionality in the daemon itself. There's a thread on the AWS forums as well as an issue on the GitHub repo related to this.
My initial idea is to create an application (probably Spring Boot) that can determine whether the daemon is running and expose a URL that the ELB can hit to do a health check on the daemon. I could then deploy it along with the daemon.
Is there a better way to go about doing this? I worry about the need to create a separate application just for a health check. There may also be some hackiness required to run two entrypoint commands in the Dockerfile.
Any ideas on a better way to accomplish this?
You don't need a load balancer at all for the X-Ray daemon container, since traffic comes only from the EC2 container instances in the cluster. The health check for the X-Ray container can be done with the ECS health check itself.
Based on the forum answer, you can configure a netstat-based container health check: if the UDP port has not been opened by the daemon container, the ECS agent will restart the container.
Below is the health check command you provide in the ECS task definition.
CMD-SHELL, netstat -aun | grep 2000 > /dev/null; if [ 0 != $? ]; then exit 1; fi;
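Wired into the container definition of the task definition JSON, that health check looks roughly like this (the interval, timeout, and retry values below are illustrative, not required values):
"healthCheck": {
  "command": ["CMD-SHELL", "netstat -aun | grep 2000 > /dev/null; if [ 0 != $? ]; then exit 1; fi;"],
  "interval": 30,
  "timeout": 5,
  "retries": 3,
  "startPeriod": 10
}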
Note:
If you are building the X-Ray Docker image yourself, please make sure you include the netstat utility in the Dockerfile, otherwise the health command will fail.
For example, if you are using the Dockerfile given in the documentation, you need to add the net-tools package to your X-Ray container image.
The following is my updated Dockerfile, which adds net-tools to the image.
FROM ubuntu:16.04
RUN apt-get update && apt-get install -y --force-yes --no-install-recommends apt-transport-https curl ca-certificates wget net-tools && apt-get clean && apt-get autoremove && rm -rf /var/lib/apt/lists/*
RUN wget https://s3.dualstack.us-east-2.amazonaws.com/aws-xray-assets.us-east-2/xray-daemon/aws-xray-daemon-3.x.deb
RUN dpkg -i aws-xray-daemon-3.x.deb
CMD ["/usr/bin/xray", "--bind=0.0.0.0:2000"]
EXPOSE 2000/udp
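If you want to smoke-test the image locally before pushing it, something along these lines should work (the image tag is arbitrary):
docker build -t xray-daemon .
docker run --rm -p 2000:2000/udp xray-daemon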

Unable to install WebGoat on AWS. I get an error about Dockerfile and Dockerrun.aws.json

I am trying to install WebGoat on AWS. I am following the instructions given at https://github.com/WebGoat/WebGoat
I can get it up and running on my local box. But when I try to deploy it on AWS, it gives an error and complains about Dockerfile and Dockerrun.aws.json.
I go to Elastic Beanstalk and create an application (of the Docker type). It asks me for the code and I give it the zip file from GitHub. After several minutes it gives errors about Dockerfile and Dockerrun.aws.json.
Webgoat has several Dockerfiles, but no Dockerrun.aws.json. I am not sure how to resolve this.
What is the best way to deploy webgoat in aws?
Will appreciate any help I can get.
Finally I was able to install it using the info provided in these two sources:
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/docker-basics.html
and https://github.com/WebGoat/WebGoat
Here are the steps:
sudo yum update -y
sudo yum install -y docker
sudo service docker start
sudo usermod -a -G docker ec2-user (Restart the server)
sudo docker pull webgoat/webgoat-8.0
sudo docker run -p 80:8080 -it webgoat/webgoat-8.0 /home/webgoat/start.sh
Make sure to modify the security group associated with the AWS instance to allow HTTP traffic. After that you should be able to access the app at this URL:
http://<instance-public-ip>:80/WebGoat/login
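If you prefer to open port 80 from the command line rather than the console, the AWS CLI can do it (the security group ID below is a placeholder for the group attached to your instance):
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp \
    --port 80 \
    --cidr 0.0.0.0/0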

How to use commands in AWS EC2 Elastic Beanstalk

I am trying to fix a problem with a program I uploaded to AWS Elastic Beanstalk (Tomcat). I found someone who seems to have had a similar problem, but I can't find where to execute their solution.
AWS EC2 tomcat permission denied creating/writing to file
The answer said that I should use the following commands:
chmod o+x /home
chmod o+x /home/ec2-user
I want to see if this will fix my problem; however, I have looked everywhere and have found no information about where I actually put these commands.
Is your problem fixed if you run those commands manually? (i.e. eb ssh into your instance, then sudo chmod o+x /home and sudo chmod o+x /home/ec2-user)
If so, you could automate running those commands using an EB extension file. The documentation is here but it would look something like this:
.ebextensions/01-fix-permissions.config
commands:
  fix_home_permissions:
    command: "chmod o+x /home"
  fix_ec2user_permissions:
    command: "chmod o+x /home/ec2-user"

CoreOS AWS userdata "docker run" on startup won't run

I'm trying to set up CoreOS on AWS to run specific commands on boot that download our DC/OS cluster's info tarball and run the scripts contained within it. These scripts help add the instance as an "agent" to our DC/OS cluster.
However, I can't seem to get the docker run commands to run. I do see that the userdata is creating the tee output file (which remains empty) and the /opt/dcos_install_tmp/ directory (which also remains empty).
The docker run commands here download an "awscli" container, fetch packages from S3 (using IAM instance profile credentials), and spit it out to the CoreOS file system.
Installing AWS CLI on CoreOS didn't seem straightforward (there's no package manager, no python), so I had to resort to this.
If I login to the instance and run the same commands by putting them in a script, I have absolutely no issues.
I check "journalctl --identifier=coreos-cloudinit" and found nothing to indicate issues. It just reports:
15:58:34 Parsing user-data as script
There is no "boot" log file for CoreOS in /var/log/ unlike in other AMIs.
I'm really stuck right now and would love some nudges in the right direction.
Here's my userdata (which I post as text during instance boot):
#!/bin/bash
/usr/bin/docker run -it --name cli governmentpaas/awscli aws s3 cp s3://<bucket>/dcos/dcos_preconfig.sh /root && /usr/bin/docker cp cli:/root/dcos_preconfig.sh . && /usr/bin/docker rm cli | tee -a /root/userdatalog.txt
/usr/bin/docker run -it --name cli governmentpaas/awscli aws s3 cp s3://<bucket>/dcos/dcos-install.tar /root && /usr/bin/docker cp cli:/root/dcos-install.tar . && /usr/bin/docker rm cli | tee -a /root/userdatalog.txt
sudo mkdir -p /opt/dcos_install_tmp
sudo tar xf dcos-install.tar -C /opt/dcos_install_tmp | tee -a /root/userdatalog.txt
sudo /bin/bash /opt/dcos_install_tmp/dcos_install.sh slave | tee -a /root/userdatalog.txt
Remove the -t flag from the docker run commands.
I had a similar problem: DigitalOcean: How to run Docker command on newly created Droplet via Java API
The problem ended up being the -t flag in the docker run command. Apparently this doesn't work because it isn't a terminal or something like that. Remove the flag and it runs fine.
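Applied to the user data above, only that flag changes; for example, the first docker run line would become (the bucket name is still elided, as in the question):
/usr/bin/docker run -i --name cli governmentpaas/awscli aws s3 cp s3://<bucket>/dcos/dcos_preconfig.sh /root && /usr/bin/docker cp cli:/root/dcos_preconfig.sh . && /usr/bin/docker rm cli | tee -a /root/userdatalog.txt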