I'm running prometheus on ECS.
I'm mounting an EFS volume on my EC2 instance, and after mounting it I'm running chmod 777 on it. I'm attaching the EFS volume to the task definition and then creating a mount point from the EFS volume to the /prometheus container path.
When the container starts, it crashes with:
level=error ts=2020-08-10T16:04:39.961Z caller=query_logger.go:109 component=activeQueryTracker msg="Failed to create directory for logging active queries"
It's definitely a permissions issue, since without mounting the volume it works fine. I also know that sometimes chmod 777 won't suffice (for example, running Grafana the same way required chown 472:472, where 472 is Grafana's user ID), but I couldn't find what else to run.
Any ideas?
You can check whether the EFS file system policy has client root access enabled.
For troubleshooting, you can check the Stopped tasks section; by clicking on any stopped task ID you can see the stopped reason.
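By default (with no file system policy attached) clients get full access, so this only matters if a policy is attached. As a rough sketch, you can inspect and, if needed, set the policy with the AWS CLI; the file system ID below is a placeholder and the wide-open example policy should be tightened for real use:

# Check whether a file system policy exists and what it allows
aws efs describe-file-system-policy --file-system-id fs-12345678

# Example permissive policy that includes ClientRootAccess
# (restrict the Principal/Conditions before using this for real)
aws efs put-file-system-policy --file-system-id fs-12345678 --policy '{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": {"AWS": "*"},
    "Action": [
      "elasticfilesystem:ClientMount",
      "elasticfilesystem:ClientWrite",
      "elasticfilesystem:ClientRootAccess"
    ]
  }]
}'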
To clarify: my Auto Scaling group removes all instances and their root EBS volumes during inactive hours, then recreates them and installs all the necessary base programs once active hours begin. However, I have a smaller EBS volume that is persistent and holds code and data I do not want wiped out during downtime. I am currently attaching it manually via the console and mounting it every time I work during active hours, using the commands below.
sudo mkdir userVolume
sudo mount /dev/xvdf userVolume
How can I automatically attach and mount this volume to a folder? This is all for the sake of minimizing cost by limiting uptime to when I can actually be working on it.
Use this code:
#!/bin/bash
# Look up this instance's ID from the instance metadata service
OUTPUT=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
# Attach the persistent EBS volume to this instance
aws ec2 attach-volume --volume-id vol-xxxxxxxxxxxx --device /dev/xvdf --instance-id "$OUTPUT" --region ap-southeast-1
Set your volume ID and region.
Refer to this link for further details: https://aws.amazon.com/premiumsupport/knowledge-center/ec2-linux-spot-instance-attach-ebs-volume/
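If you want this to happen automatically at boot, a minimal user-data sketch along those lines might look like the following. It assumes the instance profile allows ec2:AttachVolume; the volume ID, region, device name and mount point are placeholders, and on Nitro instance types the attached volume may actually show up as an NVMe device:

#!/bin/bash
VOLUME_ID=vol-xxxxxxxxxxxx
DEVICE=/dev/xvdf
MOUNT_POINT=/home/ec2-user/userVolume
REGION=ap-southeast-1

# Find this instance's ID from the instance metadata service
INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)

# Attach the persistent EBS volume to this instance
aws ec2 attach-volume --volume-id "$VOLUME_ID" --device "$DEVICE" \
  --instance-id "$INSTANCE_ID" --region "$REGION"

# Wait for the block device to appear, then mount it
while [ ! -e "$DEVICE" ]; do sleep 2; done
mkdir -p "$MOUNT_POINT"
mount "$DEVICE" "$MOUNT_POINT"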
I'm trying to mount an EFS volume with NFS on macOS, but am having permissions trouble. I am running the following command to mount the volume:
sudo mount -t nfs -o vers=4 -o tcp -w <IP Address>:/ efs/
and I am able to mount the volume successfully, but it mounts with root privileges, and I need to grant the local user access to the volume: the local user must be able to both read and write to it.
Trying to chown -R $(whoami) ./efs results in an Unknown error: 10039.
I can successfully chmod 666 the files inside the mount (sometimes with odd behavior), but ultimately I just need to grant the local user write access to the volume.
Am I missing an option in the mount command, or does anyone know how to mount the EFS volume and give the local user permissions on it?
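For reference, one thing worth checking is the numeric UID/GID mapping between the macOS user and the files on the share, since EFS enforces permissions by numeric ID; a quick check looks roughly like this (the mount path is a placeholder):

# Numeric UID/GID of the local macOS user
id -u && id -g

# Numeric owner/group of the files as seen on the mount
ls -ln ./efs

# If a Linux client with root access to the file system is available,
# ownership could be changed there to the macOS user's numeric IDs,
# e.g. (501:20 is a typical default macOS UID:GID):
#   sudo chown -R 501:20 /mnt/efs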
I have recently changed the AMI my ECS EC2 instances run on from Amazon Linux to Amazon Linux 2 (in both cases using the ECS-optimized images). I am deploying my instances using CloudFormation and having a real headache, as the new instances sometimes start successfully and sometimes don't (same stack, no updates, same code).
On the failed instances I see that there is an issue with the ECS service itself. After executing ecs-logs-collector.sh, I see in the ecs log file "warning: The Amazon ECS Container Agent is not running". The directory /var/log/ecs doesn't even exist!
I have the correct IAM role attached to the instance.
As mentioned, it is the same code being run, and on 75% of attempts it fails at the ECS service. I have no more ideas where else to look for issues/logs/errors.
AMI: ami-0650e7d86452db33b (eu-central-1)
Solved. If someone else runs into this issue, adding this to my user data helped:
cp /usr/lib/systemd/system/ecs.service /etc/systemd/system/ecs.service
sed -i '/After=cloud-final.service/d' /etc/systemd/system/ecs.service
systemctl daemon-reload
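For context, here is a sketch of how these lines can sit inside the instance user data on the Amazon Linux 2 ECS-optimized AMI; the cluster name is a placeholder:

#!/bin/bash
# Register the instance with your ECS cluster
echo "ECS_CLUSTER=my-cluster" >> /etc/ecs/ecs.config

# Work around the ecs.service startup-ordering issue: drop the
# dependency on cloud-final.service from a local copy of the unit
cp /usr/lib/systemd/system/ecs.service /etc/systemd/system/ecs.service
sed -i '/After=cloud-final.service/d' /etc/systemd/system/ecs.service
systemctl daemon-reload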
I am running a t2.micro EC2 instance in us-west-2a and the instance's state is all green.
When I access my website, it stops loading once in a while. Even if I reboot the instance, the website still doesn't load. When I stop the instance and then start it again, it shows 1/2 status checks failed.
ALARM TYPE: awsec2-i-20aaa52c-High-Network-Out
I also faced the same type of issue.
My EC2 instances were failing instance status checks after a stop/start. I took a look at the system logs and could confirm that the system was having a kernel panic and was unable to boot from the root volume.
So I launched a temporary EC2 instance and attached the EBS root volume of each affected instance to it. There I modified the GRUB configuration file so it would load a previous kernel.
The following steps:
1. Mount the EBS volume as a secondary volume under /mnt: sudo mount /dev/xvdf1 /mnt
2. Back up the grub.cfg file: sudo cp /mnt/boot/grub2/grub.cfg grub.cfg_backup
3. Edit the grub.cfg file: sudo vim /mnt/boot/grub2/grub.cfg
4. In the file, comment out (#) all the lines of the first entry, the one loading the new kernel.
Then I attached the original EBS volumes back to the original EC2 instances, and they were able to boot successfully.
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/TroubleshootingInstances.html#FilesystemKernel
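Put together, the recovery on the temporary instance looked roughly like this (the device name depends on how the volume was attached, and the backup path is a placeholder):

# Mount the broken instance's root volume as a secondary volume
sudo mount /dev/xvdf1 /mnt

# Back up the GRUB configuration before editing it
sudo cp /mnt/boot/grub2/grub.cfg /mnt/boot/grub2/grub.cfg_backup

# Comment out the first menuentry (the kernel that panics) so GRUB
# boots the previous kernel instead
sudo vim /mnt/boot/grub2/grub.cfg

# Unmount before detaching and reattaching to the original instance
sudo umount /mnt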
I have created a custom CentOS 6.5 image and registered it in AWS as an EBS root device type AMI. When I launch an instance, it works perfectly well, except that the instance storage that should be included according to the instance type is not added to the instance.
I tried booting an instance using the official CentOS 6.5 AMI from the AWS Marketplace, but got the same result.
Does anyone know the reason, or whether this is a known issue?
Thanks in advance.
First you have to make sure that the instance store is attached at launch time; in the AWS console this shows up in the instance's block device mapping (an ephemeral0 entry mapped to a device such as /dev/sdb).
Once you boot the instance, you have to create a filesystem on the drive by running:
mkfs.ext4 /dev/sdb
Then you need to mount that drive somewhere in your root filesystem:
mkdir -p /mnt/myinstancestore
mount /dev/sdb /mnt/myinstancestore
You can run these commands to check that your drive is mounted:
df -h
mount
You can also add the mount entry to your /etc/fstab file so that the drive is mounted automatically after every reboot:
/dev/sdb /mnt/myinstancestore ext4 defaults 1 2
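Putting the whole sequence together as a sketch; it assumes the instance store really shows up as /dev/sdb (depending on the instance type it may appear as /dev/xvdb or an NVMe device instead), and it adds the nofail option so a missing device doesn't block booting:

#!/bin/bash
DEVICE=/dev/sdb
MOUNT_POINT=/mnt/myinstancestore

# Create the filesystem (this wipes whatever is on the device)
mkfs.ext4 "$DEVICE"

# Mount it
mkdir -p "$MOUNT_POINT"
mount "$DEVICE" "$MOUNT_POINT"

# Persist across reboots; note that instance store contents are still
# lost whenever the instance is stopped or the underlying host changes
echo "$DEVICE $MOUNT_POINT ext4 defaults,nofail 1 2" >> /etc/fstab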