Error while trying to create a file in EFS from ECS

I'm hitting this error when I try to create a file in EFS from ECS:
ResourceInitializationError: failed to invoke EFS utils commands to set up EFS volumes: stderr:
b'mount.nfs4: mounting fs-xxxxxxxxxxxxxxxxxx.efs.eu-west-1.amazonaws.com:/opt/data/ failed,
reason given by server: No such file or directory' : unsuccessful EFS utils command execution; code: 32
I opened the security group for port 2049 (NFS), but it still gives me the error.
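A minimal sketch of a likely fix, assuming the error means what it usually does: the /opt/data directory does not exist on the EFS filesystem yet (a new EFS filesystem contains only its root directory). You can either mount the root once and create the path, or let an EFS access point create it for you. The UID/GID/permission values below are illustrative assumptions, not values from the post:
# Option 1: mount the root of the filesystem from any instance in the VPC and create the path
sudo mkdir -p /mnt/efs-root
sudo mount -t nfs4 -o nfsvers=4.1 fs-xxxxxxxxxxxxxxxxxx.efs.eu-west-1.amazonaws.com:/ /mnt/efs-root
sudo mkdir -p /mnt/efs-root/opt/data
sudo umount /mnt/efs-root
# Option 2: create an EFS access point that creates /opt/data on first use,
# then point the ECS volume configuration at that access point
aws efs create-access-point \
  --file-system-id fs-xxxxxxxxxxxxxxxxxx \
  --posix-user Uid=1000,Gid=1000 \
  --root-directory 'Path=/opt/data,CreationInfo={OwnerUid=1000,OwnerGid=1000,Permissions=0755}'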

Related

Elastic Beanstalk Docker deploy fails with "no space left on device"

I am following a tutorial to deploy a Flask application with Docker to AWS Elastic Beanstalk (EB). I created an AWS Elastic Container Registry (ECR) repository and ran some commands which successfully pushed the Docker image to ECR:
docker build -t app-backend .
docker tag app-backend:latest [URL_ID].dkr.ecr.us-east-1.amazonaws.com/app-backend:latest
docker push [URL_ID].dkr.ecr.us-east-1.amazonaws.com/app-backend:latest
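(For the push above to succeed, the Docker client has to be authenticated to ECR first; a typical login step, using the same placeholder registry URL, looks roughly like this:)
aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin [URL_ID].dkr.ecr.us-east-1.amazonaws.com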
Then I tried to deploy to EB:
eb init (selecting a Docker EB application I created on the AWS GUI)
eb deploy
On "eb init" I get the error "Cannot setup CodeCommit because there is no Source Control setup, continuing with initialization", but I assume this can be ignored as it otherwise looked fine. On "eb deploy" though, the deployment fails. In "eb-engine.log" (found in the AWS GUI), I see error messages like:
[ERROR] An error occurred during execution of command [app-deploy] - [Docker Specific Build Application]. Stop running the command. Error: failed to pull docker image: Command /bin/sh -c docker pull [URL_ID].dkr.ecr.us-east-1.amazonaws.com/app-backend:latest failed with error exit status 1. Stderr:failed to register layer: Error processing tar file(exit status 1): write /root/.cache/pip/http/5/e/7/3/b/[long number]: no space left on device
When I manually run the pull command that the error references (locally, not from the EB instance), it seems to respond as expected:
docker pull [URL_ID].dkr.ecr.us-east-1.amazonaws.com/app-backend:latest
What could be causing this deployment failure?
My Dockerrun.aws.json file looks like this:
{
  "AWSEBDockerrunVersion": "1",
  "Image": {
    "Name": "[URL_ID].dkr.ecr.us-east-1.amazonaws.com/app-backend",
    "Update": "true"
  },
  "Ports": [
    {
      "ContainerPort": 5000,
      "HostPort": 5000
    }
  ]
}
I solved this by following the Stack Overflow question "how to prevent error 'no space left on device' when deploying multi container docker application on AWS beanstalk?".
Basically, you find your Elastic Beanstalk instance in the EC2 console and modify its volume to add space to the EB instance. Then you follow the link in that Stack Overflow post to repartition the instance: SSH into it with eb ssh, use df -H and lsblk to see how much space is in each partition, and then run commands like:
sudo growpart /dev/xvda 1
sudo xfs_growfs -d /
to grow the partition and filesystem so they use all the new space you added in the EC2 console. You can check with df -H and lsblk to confirm the resize gave you more space.
Then the eb deploy command should work. If SSH isn't set up yet, you may have to run eb ssh --setup first.
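If you prefer to stay on the command line instead of the EC2 console, the volume resize itself can also be done with the AWS CLI before growing the partition; the instance and volume IDs below are placeholders:
# Find the root volume attached to the EB instance
aws ec2 describe-volumes \
  --filters Name=attachment.instance-id,Values=i-0123456789abcdef0 \
  --query 'Volumes[].{Id:VolumeId,Size:Size,Device:Attachments[0].Device}'
# Grow it (for example to 30 GiB), then run the growpart/xfs_growfs steps above via eb ssh
aws ec2 modify-volume --volume-id vol-0123456789abcdef0 --size 30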

Terraform issue in GitLab

I have an application whose build is configured in GitLab and uses Terraform; the software is ultimately deployed to AWS.
I see the following error during deployment:
null_resource.server_canary_bouncer (local-exec): Executing: ["/bin/sh" "-c" "./bouncer canary -a 'my-asg':$(aws autoscaling describe-auto-scaling-groups --auto-scaling-group-name 'my-asg' --query 'AutoScalingGroups[0].DesiredCapacity')"]
null_resource.server_canary_bouncer (local-exec): /bin/sh: ./bouncer: No such file or directory
Error: Error running command './bouncer canary -a 'my-asg':$(aws autoscaling describe-auto-scaling-groups --auto-scaling-group-name 'my-asg' --query 'AutoScalingGroups[0].DesiredCapacity')': exit status 127. Output: /bin/sh: ./bouncer: No such file or directory
[terragrunt] 2020/11/12 12:16:31 Hit multiple errors:
exit status 1
Cleaning up file based variables
00:01
ERROR: Job failed: exit code 1
I don't have much knowledge of Terraform and hence don't really understand what to make of this log.
Any idea how this can be solved?
Output: /bin/sh: ./bouncer: No such file or directory means you are trying to run a file/script/command that does not exist in the directory from which you are running Terraform.
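A rough sketch of what the fix can look like in the GitLab job, assuming ./bouncer is a pre-built binary that the local-exec provisioner expects to find in the directory Terraform/Terragrunt runs from (the paths below are illustrative, not taken from the post):
# Before running terragrunt/terraform apply in the CI job, confirm the binary is there
ls -l ./bouncer || echo "bouncer binary missing from $(pwd)"
# If it lives elsewhere in the repo or is a CI artifact, copy it into the working
# directory and make it executable
cp tools/bouncer ./bouncer
chmod +x ./bouncer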

Prometheus Docker on AWS with EFS - No write access

I'm running Prometheus on ECS.
I'm mounting an EFS volume to my EC2 instance. When mounting the EFS I'm running chmod 777 on it. I'm attaching the EFS volume to the task definition and then creating a mount point from the EFS volume to the /prometheus container path.
When the container starts, it crashes with:
level=error ts=2020-08-10T16:04:39.961Z caller=query_logger.go:109 component=activeQueryTracker msg="Failed to create directory for logging active queries"
It's definitely a permissions issue, since without mounting the volume it works fine. I also know that sometimes running chmod 777 won't suffice (for example, running Grafana the same way required running chown 472:472, where 472 is Grafana's user ID), but I couldn't find what else to run.
Any ideas?
You can check whether the EFS file system policy has client root access enabled.
For troubleshooting, you can check the stopped tasks section: by clicking on any stopped task ID you can see the stopped reason.
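If the file system policy is not the problem, another common fix is ownership: the official prom/prometheus image runs as the nobody user (UID/GID 65534), so the directory on EFS that backs the /prometheus container path needs to be writable by that user. A rough sketch from the EC2 host where the EFS is mounted (the mount path is an assumption, and the UID/GID assumes the stock Prometheus image):
# On the EC2 host, with the EFS mounted at /mnt/efs (adjust to your mount point)
sudo mkdir -p /mnt/efs/prometheus
sudo chown -R 65534:65534 /mnt/efs/prometheus   # nobody:nobody, the user the prom/prometheus image runs as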

AWS EFS filesystem mounting issue via ansible-playbook

We installed the EFS utility (efs-utils) and configured the EFS filesystem with EFS mount targets within the VPC.
Added the entry in /etc/fstab for a permanent mount like below:
echo "fs-xxxxxxx /mnt/efs efs tls,_netdev 0 0" >> /etc/fstab
After this, when I manually run mount -a -t efs, it works fine: the filesystem gets mounted successfully without any issue.
But when I try to invoke the same thing from the Ansible mount module like below:
- name: Mount up efs
  mount:
    path: /mnt/efs
    src: fs-xxxxxxxx
    fstype: efs
    opts: tls
    state: mounted
  become: true
  become_method: pbrun
  become_user: root
Note: Ansible is running as a root-privileged user on the target host.
Expected Result:
EFS filesystem should get mounted without any issue.
Actual Result:
We are getting an error in Ansible saying:
Error:
only root can run mount.efs
When I started debugging the issue, I found the relevant check in __init__.py of efs-utils:
https://github.com/aws/efs-utils/blob/555154b79572cd2a9f63782cac4c1062eb9b1ebd/src/mount_efs/init.py
The mount helper validates the user with the getpass Python module, but somehow, even though I am using become in Ansible, that does not help me get rid of this error.
Could anyone please help me resolve this issue?
Either use nfs4 as the fstype, or you may need to install the EFS mount helper (amazon-efs-utils) on the target host.
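As a rough sketch of the first option (plain NFS instead of the mount helper), the equivalent mount uses the nfs4 type with the options AWS documents for EFS over NFSv4.1; the filesystem ID and region below are placeholders, and note that the tls option is only available through the mount helper:
sudo mount -t nfs4 \
  -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport \
  fs-xxxxxxxx.efs.<region>.amazonaws.com:/ /mnt/efs
In the Ansible task this maps to fstype: nfs4 with the same string in opts, and no special become handling is needed because mount.nfs4 does not have the root check that mount.efs does.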

Using the command line to create AWS Elastic Beanstalk fails

I tried to set up EB for the worker tier using the following command:
eb create -t worker
But I receive the following error
2015-11-04 16:44:01 UTC+0800 ERROR Stack named 'awseb-e-wh4epksrzi-stack' aborted operation. Current state: 'CREATE_FAILED' Reason: The following resource(s) failed to create: [AWSEBWorkerCronLeaderRegistry, AWSEBSecurityGroup].
2015-11-04 16:43:58 UTC+0800 ERROR Creating security group named: sg-7ba1f41e failed Reason: Resource creation cancelled
Is there something specific I need to do to run this from the command line?
I found the eb command line buggy. Try to use the web console; it's much more reliable.
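If you do want to stay on the command line, the underlying CloudFormation stack usually records the real failure reason for the security group; a quick way to pull it out (the stack name is the one from the error above, and the stack must still exist):
aws cloudformation describe-stack-events \
  --stack-name awseb-e-wh4epksrzi-stack \
  --query 'StackEvents[?ResourceStatus==`CREATE_FAILED`].[LogicalResourceId,ResourceStatusReason]' \
  --output table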