CloudWatch logs with Amazon Linux 2

I am upgrading my Elastic Beanstalk environment to use Amazon Linux 2.
In my old environment, I could monitor my Spring Boot application logs by watching the log group with cw.exe /aws/elasticbeanstalk/myapp/var/log/eb-docker/containers/eb-current-app/stdouterr.log
Now, however, no logs are displayed for the new application, and furthermore I notice that the stdouterr.log in /eb-current-app/ seems to have the instance ID prepended to the log name.
What do I need to do to restore the previous behavior so I can monitor my logs?

The CloudWatch Logs agent is now available as a yum package in Amazon Linux 2:
https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/QuickStartEC2Instance.html
sudo yum install -y awslogs
You might also have to edit /etc/awslogs/awscli.conf to change the region.
Finally, you will need to start and enable the service:
sudo systemctl start awslogsd
sudo systemctl enable awslogsd.service
You can then set all of this up as a config file such as .ebextensions/cloudwatch.config; a sketch follows.
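Here is a minimal sketch of such a config, assuming the awslogs package on Amazon Linux 2; the region, file mode, and section contents are assumptions to adapt:

packages:
  yum:
    awslogs: []

files:
  "/etc/awslogs/awscli.conf":
    mode: "000600"
    owner: root
    group: root
    content: |
      [plugins]
      cwlogs = cwlogs
      [default]
      region = us-east-1

commands:
  01_enable_awslogsd:
    command: systemctl enable awslogsd.service
  02_start_awslogsd:
    command: systemctl restart awslogsd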

How do I get my EC2 Instance to connect to ECS Cluster?

I have an ECS cluster defined in AWS and an Auto Scaling Group that I use to add/remove instances to handle tasks as necessary. I have the ASG set up so that it creates the EC2 instance at the appropriate time, but the instance won't connect to the ECS cluster unless I manually go in and disable/enable the ECS service.
I am using the Amazon Linux 2 AMI on the EC2 machines, and everything is in the same region/account, etc.
I have included my user data below.
#!/bin/bash
yum update -y
amazon-linux-extras disable docker
amazon-linux-extras install -y ecs
echo "ECS_CLUSTER={CLUSTERNAME}" >> /etc/ecs/ecs.config
systemctl enable --now ecs
As mentioned, this installs the ECS service and sets the config file properly, but the enable doesn't actually connect the machine to the cluster. Running the same disable/enable commands on the machine once it is up connects it without problem. What am I missing?
First, the correct syntax is:
#!/usr/bin/env bash
echo "ECS_CLUSTER=CLUSTER_NAME" >> /etc/ecs/ecs.config
Once you update the config, it is better to restart the ECS agent:
#!/usr/bin/env bash
echo "ECS_CLUSTER=CLUSTER_NAME" >> /etc/ecs/ecs.config
# Update the ECS init package; useful when using a custom AMI
sudo yum update -y ecs-init
# Pull the latest ECS agent image
/usr/bin/docker pull amazon/amazon-ecs-agent:latest
# Restart Docker and the ECS agent
sudo service docker restart
sudo systemctl restart ecs  # on Amazon Linux 1, use: sudo start ecs
I ended up solving this using the old adage: turn it off and on again.
I added shutdown -r 0 to the bottom of the user data script to restart the machine after it was "configured", and it connected right away.
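Concretely, a sketch of the user data from the question with the reboot appended ({CLUSTERNAME} remains a placeholder):
#!/bin/bash
yum update -y
amazon-linux-extras disable docker
amazon-linux-extras install -y ecs
echo "ECS_CLUSTER={CLUSTERNAME}" >> /etc/ecs/ecs.config
systemctl enable --now ecs
# Restart the machine once configured; the agent connects after the reboot
shutdown -r 0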

TeamCity Agent - AWS CLI

I have deployed a TeamCity server and agent to AWS using the JetBrains stack template (https://www.jetbrains.com/help/teamcity/running-teamcity-stack-in-aws.html).
All seems to be good: my server starts, the agent is functional, and I have created several builds, etc.
I have come to a point where I want to deploy my application to an AWS environment using aws-cli commands.
I am struggling to enable/install the aws-cli on the agent. My build steps are erroring out with aws: command not found.
Does anyone have any ideas?
My progress so far: I have connected to the agent EC2 machine via an SSH bastion, and I am able to invoke aws --version as ec2-user, but the build agent cannot see aws.
Turns out, my TeamCity agent runs in AWS ECS via the Docker image https://hub.docker.com/r/jetbrains/teamcity-agent
What I ended up doing is creating my own Docker image, using the JetBrains one as a base.
I uploaded my Docker image to an AWS ECR repository. Afterwards, I created a new revision of the original task definition. This new revision uses my image instead of the original one, so I have the aws-cli there.
I then added my AWS profile to the EC2 host machine and added a volume to the Docker container (via the task definition) so that the container can access the .aws/credentials file.
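For illustration, the relevant fragment of such a task definition revision might look roughly like this; the volume name, host path, and container path (for a root user inside the container) are assumptions:
"volumes": [
  { "name": "aws-creds", "host": { "sourcePath": "/home/ec2-user/.aws" } }
],
"containerDefinitions": [
  {
    "mountPoints": [
      { "sourceVolume": "aws-creds", "containerPath": "/root/.aws", "readOnly": true }
    ]
  }
]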
Dockerfile looks like this:
FROM jetbrains/teamcity-agent
# Install pip, then the AWS CLI
RUN apt-get update && apt-get install -y python-pip
RUN pip install awscli --upgrade --user
# --user installs to /root/.local/bin; "~" is not expanded in ENV, so use the absolute path
ENV PATH="/root/.local/bin:${PATH}"
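For reference, the build-and-push flow to ECR with a recent AWS CLI looks roughly like this; the account ID, region, and repository name teamcity-agent-aws are placeholders:
aws ecr create-repository --repository-name teamcity-agent-aws --region us-east-1
aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
docker build -t teamcity-agent-aws .
docker tag teamcity-agent-aws:latest 123456789012.dkr.ecr.us-east-1.amazonaws.com/teamcity-agent-aws:latest
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/teamcity-agent-aws:latest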
I added the aws-cli to the TeamCity agent over a remote desktop connection, since I was using a Windows agent. In the build steps I set the Runner Type to Command Line and executed the aws commands.
For more information, you can refer to the question below, where I answered it:
How to deploy to AWS Elastic Beanstalk on successful Teamcity build

How to create a health check for XRAY Daemon Task

I'm trying to implement X-Ray for our AWS ECS Spring Boot application. To do so, I'm creating a new task with a separate Dockerfile just for the X-Ray daemon, as suggested by the AWS documentation and in the answer to another question I asked about the daemon setup.
However, when I try to deploy this to AWS, the load balancer requires a health check endpoint to determine that the service has been deployed successfully.
There is no health check functionality in the daemon itself. There's a thread on the AWS forums as well as an issue on the GitHub repo related to this.
My initial idea is to create an application (probably Spring Boot) that can determine whether the daemon is running and expose a URL that the ELB can hit to health-check the daemon. I could then deploy it alongside the daemon.
Is there a better way to go about doing this? I worry about needing a separate application just for a health check, and there may be some hackiness required to run two entrypoint commands in the Dockerfile as well.
Any ideas on a better way to accomplish this?
You don't need a load balancer at all for the X-Ray daemon container, since traffic comes only from the cluster's EC2 containers. The health check for the X-Ray container can be done using the ECS container health check itself.
Based on the forum answer, you can use netstat in the container health check: if the UDP port is not opened by the daemon container, the ECS agent will restart the container.
Below is the health check command you provide in the ECS task definition:
CMD-SHELL, netstat -aun | grep 2000 > /dev/null; if [ 0 != $? ]; then exit 1; fi;
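In the task definition JSON, that command sits in the container's healthCheck block; the interval, timeout, and retry values below are assumptions:
"healthCheck": {
  "command": ["CMD-SHELL", "netstat -aun | grep 2000 > /dev/null; if [ 0 != $? ]; then exit 1; fi;"],
  "interval": 30,
  "timeout": 5,
  "retries": 3
}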
Note:
If you are building the X-Ray Docker image yourself, make sure you include the netstat utility in the Dockerfile, otherwise the health command will fail.
For example, if you are using the Dockerfile given in this documentation, you need to add the net-tools package to your X-Ray container image.
The following is my updated Dockerfile, which adds net-tools to the image.
FROM ubuntu:16.04
# net-tools provides netstat, which the ECS health check command needs
RUN apt-get update && apt-get install -y --force-yes --no-install-recommends apt-transport-https curl ca-certificates wget net-tools && apt-get clean && apt-get autoremove && rm -rf /var/lib/apt/lists/*
# Download and install the X-Ray daemon
RUN wget https://s3.dualstack.us-east-2.amazonaws.com/aws-xray-assets.us-east-2/xray-daemon/aws-xray-daemon-3.x.deb
RUN dpkg -i aws-xray-daemon-3.x.deb
EXPOSE 2000/udp
CMD ["/usr/bin/xray", "--bind=0.0.0.0:2000"]
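Before deploying, you can sanity-check that netstat made it into the image; the image tag is a placeholder:
docker build -t xray-daemon .
docker run --rm xray-daemon netstat -aun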

Get Advanced Details provided while creating an AWS EC2 instance

I was creating a new AWS EC2 instance. In Step 1 I selected an Amazon Linux AMI, and in Step 2, after some basic details, I provided the following in Advanced Details:
#!/bin/bash
yum install httpd -y
yum update -y
service httpd start
chkconfig httpd on
echo "<html><h1>Hello Test Page!</h1></html>" > /var/www/html/index.html
Somehow this script did not execute after the EC2 instance was ready. I have the following questions:
Can we get a log of what exactly happened when this script executed?
Also, from the console, is it possible to see what values were specified in Advanced Details while setting up an EC2 instance?
Log in to your EC2 instance and check /var/log/cloud-init-output.log for any errors.
As for the user data specified, I don't think you can see it on the console, but you can verify it by querying http://169.254.169.254/latest/user-data/ after logging into the EC2 instance.
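For example, from a shell on the instance:
curl http://169.254.169.254/latest/user-data/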

How to auto create new hosts in Logentries for AWS EC2 autoscaling group

What's the best way to send logs from Auto Scaling groups (of EC2 instances) to Logentries?
I previously used the EC2 platform to set up log monitoring for all of my EC2 instances created by an Auto Scaling group. However, per the Auto Scaling rules, a new instance will spin up if a current one is destroyed.
Now how do I automate Logentries so that it creates a new host and starts getting logs? I've read this https://logentries.com/doc/linux-agent-with-chef/#updating-le-agent but I'm stuck at the override['le']['pull-server-side-config'] = false since I don't know anything about Chef (I just took the training from their site).
For an Auto Scaling group, you need to get this baked into an AMI, or scripted to run on startup. You can get an EC2 instance to run commands on startup (via user data) once you've figured out which script to run.
The Logentries Linux agent installation docs have setup instructions for an Amazon AMI (under Installation > Select your distro below > Amazon AMI).
Run the following commands one by one in your terminal.
You will need to provide your Logentries credentials to link the agent to your account.
sudo -s
# Add the Logentries yum repository
tee /etc/yum.repos.d/logentries.repo <<EOF
[logentries]
name=Logentries repo
enabled=1
metadata_expire=1d
baseurl=http://rep.logentries.com/amazon\$releasever/\$basearch
gpgkey=http://rep.logentries.com/RPM-GPG-KEY-logentries
EOF
yum update
# Install the agent and register this host with your account (interactive)
yum install logentries
le register
# Install the daemon that follows the configured logs
yum install logentries-daemon
I recommend trying that script once and seeing if it works properly for you; then you could include it in the user data for your Auto Scaling launch configuration.
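As a sketch, a non-interactive user data version might look like this; since le register normally prompts for credentials, the --account-key flag below is an assumption to verify against your agent's le register --help:
#!/bin/bash
# ... same logentries.repo setup as above ...
yum update -y
yum install -y logentries
# --account-key is an assumption for unattended registration
le register --account-key=YOUR_ACCOUNT_KEY
yum install -y logentries-daemon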