Can I configure Linux swap space on AWS Elastic Beanstalk?

Can I configure Linux swap space for an AWS Elastic Beanstalk environment?
I don't see an option for it in the console. Looking at /proc/meminfo on the running instances in my environment, MemAvailable is quite low even though the Inactive values are fairly high. I suspect there are a few dormant background processes that it would do no harm to page out, which would free up a non-trivial portion of the limited physical memory on the t2.nano I'm using.
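For reference, a quick way to see these figures on the instance (MemAvailable, Inactive, and whether any swap exists at all):
grep -E 'MemTotal|MemAvailable|Inactive:|SwapTotal' /proc/meminfo
free -m    # same picture in a more readable summary, including the Swap row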

I figured out how to do this using the .ebextensions config folder in my Tomcat web app.
Add a file .ebextensions/swap.config:
container_commands:
  add-swap-space:
    command: "/bin/bash .ebextensions/scripts/add-swap-space.sh"
    ignoreErrors: true
Add a file .ebextensions/scripts/add-swap-space.sh:
#!/bin/bash
set -o xtrace
set -e
if grep -E 'SwapTotal:\s+0+\s+kB' /proc/meminfo; then
  echo "Positively identified no swap space, creating some."
  dd if=/dev/zero of=/var/swapfile bs=1M count=512
  /sbin/mkswap /var/swapfile
  chmod 000 /var/swapfile
  /sbin/swapon /var/swapfile
else
  echo "Did not confirm zero swap space, doing nothing."
fi
More docs: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/customize-containers-ec2.html
After running for a while, this allowed 150 MiB to be swapped out on my t2.nano instance, which runs the Elastic Beanstalk Java Tomcat platform with default heap options. From what I can see there is no ongoing paging while the application runs. It looks like some dormant data has been pushed to swap, and the page cache is significantly larger (up from 30 MiB to 180 MiB).
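For anyone wanting to verify the same behaviour on their own instance, a rough check is:
free -m       # the "Swap:" row shows how much has been swapped out
vmstat 5 3    # si/so (swap-in/swap-out) columns near zero mean no ongoing paging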

Related

Elastic Beanstalk stalling after running out of memory

I have an Elastic Beanstalk instance which runs out of memory, starts giving 500 errors, and its health goes to degraded, which is expected behavior. But once the memory is released and the server health is back to normal, HTTPS requests keep stalling. There is no CPU or memory activity on the server, yet it still fails to load 2 out of 5 requests. The issue is resolved once I restart the Elastic Beanstalk instance.
I checked the error logs but there are no records; it seems as if the requests are not hitting the server and are being blocked at the load balancer. Is anyone else facing this issue? I tried running a few different applications on the Elastic Beanstalk instance but the results are the same.
Any suggestions or pointers to docs would be appreciated, as I didn't find anything in the AWS docs about this.
Thanks
It is possible that some important process was OOM killed, and has not started back up properly. You can look at this question for help on that: Finding which process was killed by Linux OOM killer
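As a starting point, the kernel log usually records an OOM kill; something along these lines should show it (exact wording varies by kernel version):
dmesg -T | grep -iE 'out of memory|killed process'
journalctl -k | grep -i 'killed process'    # alternative on systemd-based systems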
It may be possible to simply reboot the server, and everything may start back up properly again.
The best solution is to not run out of memory in the first place. If you are unable to upgrade the server, perhaps due to cost reasons, then consider adding a swap file.
I have an Elastic Beanstalk application in which I add a swap file using an ebextension.
# .ebextensions/01-swap.config
commands:
  "01-swap":
    command: |
      dd if=/dev/zero of=/var/swapfile bs=1M count=512
      chmod 600 /var/swapfile
      mkswap /var/swapfile
      swapon /var/swapfile
      echo "/var/swapfile none swap sw 0 0" >> /etc/fstab
    test: test ! -f /var/swapfile
https://github.com/stefansundin/rssbox/blob/1e40fe60f888ad0143e5c4fb83c1471986032963/.ebextensions/01-swap.config
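After the environment deploys, a quick sanity check (via SSH, not part of the ebextension itself) is to confirm the swap file is active and registered in fstab:
swapon --show              # should list /var/swapfile
free -h                    # the "Swap:" row should show roughly 512M total
grep swapfile /etc/fstab   # this entry keeps the swap across reboots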

How to setup AWS CloudWatch's agent at Ubuntu to get (correct) custom metrics like cpu, memory and disk usage %

I'm running an AWS EC2 m5.large (a non-burstable instance). I have set up one of AWS CloudWatch's default metrics (CPU %) + some custom metrics (memory + disk usage) in my dashboard.
But when I compare the numbers CloudWatch reports to me, they are pretty far from the actual usage of the Ubuntu 20.04 server when I log in to it...
Actual usage:
CPU: ~ 35 %
Memory: ~ 33 %
CloudWatch reports:
CPU: ~ 10 %
Memory: ~ 50-55 %
https://www.screencast.com/t/o1nAnOFjVZW
I have followed AWS's own instructions to add the metrics for memory and disk usage (because CloudWatch doesn't have access to OS-level stuff out of the box): https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/mon-scripts.html
When the numbers are so far from each other, it's impossible to set up useful alarms and notifications. I can't believe that is what AWS wants to provide to the people who chose to follow their original instructions?
The only thing that matches exactly is the disk usage %.
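(The "actual usage" numbers above come from logging in to the server; something like the following is one way to read comparable percentages there. Note that tools differ on whether buffers/cache count as "used", which can account for part of a memory gap like this.)
top -bn1 | grep '%Cpu'    # instantaneous CPU usage (100 minus the id value)
free -m | awk '/^Mem:/ {printf "Memory used: %.1f%%\n", ($2-$7)/$2*100}'    # (total - available) / total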
HOW TO INSTALL THE AWS CLOUDWATCH AGENT ON UBUNTU 20.04 (NEWER WAY, INSTEAD OF THE OLD SCRIPT: "CloudWatchMonitoringScripts")
https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/download-cloudwatch-agent-commandline.html
1. sudo wget https://s3.amazonaws.com/amazoncloudwatch-agent/debian/amd64/latest/amazon-cloudwatch-agent.deb
2. sudo dpkg -i -E ./amazon-cloudwatch-agent.deb
3. sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-config-wizard
4. Go through all the steps in the wizard (The result is saved here: /opt/aws/amazon-cloudwatch-agent/bin/config.json)
Hint: I answered:
- Default to most questions, and otherwise:
- NO --> Do you want to store the config in the SSM parameter store? (Because when I answered YES, it failed later on due to a permission issue, I didn't know how to fix it, and I don't think I need SSM for this.)
- YES --> Do you want to turn on StatsD daemon?
- YES --> Do you want to monitor metrics from CollectD?
- NO --> Do you have any existing CloudWatch Log Agent?
Now to prevent this error: Error parsing /opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.toml, open /usr/share/collectd/types.db: no such file or directory
https://github.com/awsdocs/amazon-cloudwatch-user-guide/issues/1
5. sudo mkdir -p /usr/share/collectd/
6. sudo touch /usr/share/collectd/types.db
7. sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl -a fetch-config -m ec2 -c file:/opt/aws/amazon-cloudwatch-agent/bin/config.json -s
8. /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl -m ec2 -a status
{
  "status": "running",
  "starttime": "2020-06-07T10:04:41+00:00",
  "version": "1.245315.0"
}
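If the status or the reported numbers look wrong, the agent's own log and the generated TOML (both under the default install paths used above) are the first things worth checking:
sudo tail -n 50 /opt/aws/amazon-cloudwatch-agent/logs/amazon-cloudwatch-agent.log
less /opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.toml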
https://www.screencast.com/t/42VWgoS88Z (Create IAM role, add policies and make it the server default role).
https://www.screencast.com/t/fAUUHCPe (CloudWatch new custom metrics)
https://www.screencast.com/t/8J0Saw0co (data match OK now)
https://www.screencast.com/t/x0PxOa799 (data match OK now)
I realized that the second I log in to the machine, the CPU % usage goes up from 10 % to 30 % and stays there (of course some increase was to be expected, but not that much in my opinion), which in my case explains the big difference earlier... I honestly don't know if this way is more precise than the older script, but this should be the right way to do it in 2020 :-) And you get access to 179 custom metrics when selecting "Advanced" during the wizard (even though only a few would be valuable to most people).

Limit a Docker container's disk IO - AWS EBS/EC2 Instance

When I run a new install of WordPress or a simple build command for some of my web apps in Jenkins, the server grinds to a halt. In Netdata it appears the culprit is high "iowait".
I know that I can increase the IOPS on the EBS volume but I'd rather just wait a longer time for the process to finish. Is there a way to limit IOPS on a docker container (in this case; my Jenkins container)?
Try the --device-read-iops and --device-write-iops options of the docker run command.
The command should be something like this:
docker run -itd --device-read-iops /dev/sda:100 --device-write-iops /dev/sda:100 image-name
NOTE: /dev/sda is the device name and 100 is the IOPS limit.
You can also limit IO in terms of bytes per second using the --device-read-bps and --device-write-bps options.
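For example, a bytes-per-second variant could look like this (the 10mb values are arbitrary, and /dev/sda should be whichever block device backs the container's storage, e.g. your EBS volume):
docker run -itd --device-read-bps /dev/sda:10mb --device-write-bps /dev/sda:10mb image-name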
Check this documentation for more info.
https://docs.docker.com/engine/reference/run/

Elastic BeanStalk EC2 instance's log uses up whole disk space

I have an Elastic Beanstalk environment where I run my application on one EC2 instance. I added a load balancer when I configured the environment initially, but since then I have set it to use only one instance.
The application running within the container apparently produces quite a lot of logs: after several days they use up the whole disk space and then the application crashes. The health check drops to severe.
I see that terminating the instance manually helps: the environment removes the old instance and creates a new one that works (until it fills up the whole disk again).
What are my options? A script that regularly cleans up logs? Some log rotation? A trigger that reboots the instance when the disk is nearly full?
I do not write anything to a file myself; my application only logs to stdout and stderr, so writing to file is done by the EC2/EBS wrapper. (I deploy the application as a ZIP containing a JAR, a bash script and a Procfile, if that is relevant.)
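For reference, a quick way to confirm on the instance what is actually filling the disk before picking a fix:
df -h                                                                  # which filesystem is full
sudo du -xh /var/lib/docker/containers 2>/dev/null | sort -h | tail    # container logs are the usual culprit on the Docker platform
sudo du -xh /var/log 2>/dev/null | sort -h | tail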
By default EB will rotate some of the logs produced by the Docker containers, but not all of them. After contacting support on this issue I received the following helpful config file, to be placed in the source path .ebextensions/liblogrotate.config:
files:
  "/etc/logrotate.elasticbeanstalk.hourly/logrotate.elasticbeanstalk.containers.conf":
    mode: "00644"
    owner: "root"
    group: "root"
    content: |
      /var/lib/docker/containers/*/*.log {
        size 10M
        rotate 5
        missingok
        compress
        notifempty
        copytruncate
        dateext
        dateformat %s
        olddir /var/lib/docker/containers/rotated
      }
  "/etc/cron.hourly/cron.logrotate.elasticbeanstalk.containers.conf":
    mode: "00755"
    owner: "root"
    group: "root"
    content: |
      #!/bin/sh
      test -x /usr/sbin/logrotate || exit 0
      /usr/sbin/logrotate /etc/logrotate.elasticbeanstalk.hourly/logrotate.elasticbeanstalk.containers.conf
container_commands:
  create_rotated_dir:
    command: mkdir -p /var/lib/docker/containers/rotated
    test: test ! -d /var/lib/docker/containers/rotated
  99_cleanup:
    command: rm /etc/cron.hourly/*.bak /etc/logrotate.elasticbeanstalk.hourly/*.bak
    ignoreErrors: true
What this does is install an additional log rotation configuration and cron task for the /var/lib/docker/containers/*/*.log files which are the ones not automatically rotated on EB.
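After deploying, a quick way to confirm logrotate accepts the new configuration is a dry run on the instance (with -d nothing is actually rotated):
sudo /usr/sbin/logrotate -d /etc/logrotate.elasticbeanstalk.hourly/logrotate.elasticbeanstalk.containers.conf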
Eventually, however, the rotated logs themselves will fill up the disk if the host lives long enough. For this, you can add shred to the list of logrotate options (alongside compress, notifempty, etc.).
(However, I'm not sure whether the container logs that are already configured for rotation are set to be shredded; probably not, so those may accumulate too and require modifying the default EB log rotation config. Not sure how to do that yet. But the above solution should be sufficient in most cases, since hosts typically do not live that long. The volume of logging and the lifetime of your containers may force you to go even further.)
Log rotation is the way forward. You can create a configuration file in /etc/logrotate.d/ where you state your options in order to avoid having large log files.
You can read more about the configurations here https://linuxconfig.org/setting-up-logrotate-on-redhat-linux
A sample configuration file would look something like this:
/var/log/your-large-log.log {
  missingok
  notifempty
  compress
  size 20k
  daily
  create 0600 root root
}
You can also test the new configuration file from the CLI by running the following:
logrotate -d [your_config_file]
This will test whether the log rotation would be successful, but only in debug mode, so the log file will not actually be rotated.

One docker container on AWS - resource usage?

If I set up just one Docker container on my AWS instance, and use only the default configuration, would this Docker container use the whole memory and all the processors?
Or do I need to configure it?
Memory
There is no memory limit for the container by default; it can use as much memory as the host has available.
You can configure the memory usage using the "-m" flag of the docker run command:
-m, --memory="" Memory limit (format: <number><optional unit>, where unit = b, k, m or g)
$ docker run -t -i -m 500M centos6_my /bin/bash
Processors
By default, containers can run on any of the available CPUs. CPU usage can be configured using the "-c" and "--cpuset" flags of the docker run command (a combined example is shown after the flag descriptions below):
-c, --cpu-shares=0 CPU shares (relative weight)
--cpuset="" CPUs in which to allow execution (0-3, 0,1)
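Putting the two together, a combined invocation might look like this (limits are placeholders; newer Docker versions spell the last flag --cpuset-cpus):
$ docker run -t -i -m 500M -c 512 --cpuset="0,1" centos6_my /bin/bash    # 500 MB memory cap, half the default CPU share weight, restricted to CPUs 0 and 1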
Please read the Docker documentation for more details: link