How can I access S3 Bucket from within ECS Task - amazon-web-services

I'm currently debugging an ECS task which basically grabs a message from an SQS queue, downloads a file from S3, manipulates it and uploads it back to S3. The script works fine locally, and it also works fine locally inside a Docker container. But when I create a task in ECS with the Docker image and let it run, it doesn't seem to process the file. To track down the problem I created a very small script which simply uploads a file to S3:
aws s3 cp hello-world.txt s3://my-bucket-name/hello-world.txt
Again this works fine locally, and it works fine in a Docker container (locally). But when I create an ECS task for it, it simply won't work. The ECS task has a role with "Full S3 Access"... any ideas?
Could it be that I need a bucket policy on my S3 bucket? I thought it would be sufficient to grant access to the AWS services that need it, but apparently that's not working... and using my admin account I can create objects in the bucket via the AWS CLI...
EDIT
Ok, it seems that the problem is the region. I created another bucket in a different region (it was Frankfurt before and now Ireland) and now I can copy to the bucket as I would expect. Funnily enough, I can create buckets programmatically (even from within my ECS task), but I can't seem to create objects in buckets located in Frankfurt.
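For what it's worth, a likely explanation (not confirmed in the post) is that newer regions such as eu-central-1 (Frankfurt) only accept Signature Version 4 requests, which older AWS CLI builds don't send by default. A minimal sketch of the workaround, assuming that is the cause:
# Pin the CLI to the bucket's region and force SigV4 for S3 requests
aws configure set default.region eu-central-1
aws configure set default.s3.signature_version s3v4
# Retry the test upload against the Frankfurt bucket
aws s3 cp hello-world.txt s3://my-bucket-name/hello-world.txt --region eu-central-1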

Related

Upload files to S3 during deploy

I want to create a bucket during the deployment process, but when I do this an error about assets appears: "must have values". So I decided to create one stack that only uploads the files and another stack that deploys an EC2 instance. With this approach, though, the EC2 UserData can't find the files on S3 to download them, and I need those files to configure my instance. I could create the S3 bucket manually before deploying the EC2 stack, but I want to automate this process. How can I do this?
You need to configure S3 access on the machine where you want to automate the process.
Use the AWS CLI tools: run aws configure on your server and define the credentials.
OR
If it is an EC2 instance, create an IAM role with S3 write permissions and attach it to the EC2 instance.
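A minimal sketch of the aws configure route (the key values and bucket name below are placeholders, not real ones); on EC2 the instance-role option is preferable because no long-lived keys end up on the box:
# One-time credential setup on the machine that runs the automation
aws configure set aws_access_key_id AKIAEXAMPLEKEYID
aws configure set aws_secret_access_key EXAMPLESECRETACCESSKEY
aws configure set default.region eu-west-1
# Verify write access works
aws s3 cp ./deploy-assets/ s3://my-deploy-bucket/assets/ --recursive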
You can do the following:
Create 2 separate stacks (we'll refer to them as s3Stack and ec2Stack)
Add ec2Stack.addDependency(s3Stack) where you create these stacks
In the s3Stack, create the bucket and upload the assets using aws-s3-deployment
Give the EC2 instance permission to get the necessary files from the previously created bucket.
This will ensure you can deploy everything with just one command, cdk deploy ec2Stack. It will check whether the s3Stack needs to be created/updated first, and only when those updates are done will your ec2Stack be deployed.
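The whole deployment then comes down to one command; a quick sketch, assuming the stack names used above and a hypothetical asset bucket name:
# Deploys s3Stack first (because of addDependency), then ec2Stack
cdk deploy ec2Stack
# Optional sanity check that the assets landed before the instance needs them
aws s3 ls s3://my-asset-bucket/ --recursive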

Container is not able to call S3 in Fargate

I'm not able to synchronize a log folder to S3 from inside a container.
I'm trying to get the following setup:
a Docker container with the AWS CLI installed
log files and other files are generated inside the container
a cron job calls the "aws s3 sync" command through a shell script
The synchronisation is not working properly and I'm not sure why.
I tried the following, which worked just fine:
provided the access key/secret access key inside the Docker container
this worked locally, with plain ECS and with Fargate
but using access keys is not recommended
plain ECS without any keys (just the IAM role)
this worked too
I played a little with the configuration and read through the documentation.
The only hints I have are:
Does it have something to do with the network mode "awsvpc" (which Fargate has to use)?
Does it have something to do with the "AWS_CONTAINER_CREDENTIALS_RELATIVE_URI" environment variable?
I found a few hits on the web, but I'm not sure whether it's set or not, and I'm not able to look inside the container in Fargate.
An ECS task definition has two parameters related to IAM roles:
executionRoleArn - gives the ECS agent the permissions it needs to start the task, such as pulling images from ECR and writing logs to CloudWatch.
taskRoleArn - allows the task itself to make AWS API calls to interact with AWS resources such as S3.
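A quick way to check which roles a task definition actually carries (the task definition name here is hypothetical):
aws ecs describe-task-definition --task-definition my-task \
  --query 'taskDefinition.{executionRole:executionRoleArn,taskRole:taskRoleArn}'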
In my case I had a shell script which I invoked using the entrypoint in the task definition. I had correctly set the task role with access to S3, however it did not work. So, using the information provided here https://forums.aws.amazon.com/thread.jspa?threadID=273767#898645
I added the following as the first line of my shell script:
export AWS_CONTAINER_CREDENTIALS_RELATIVE_URI
Still it did not work. Then I upgraded the AWS CLI in the Docker container to version 2 and it worked. So for me the real problem was that the Docker image had an old CLI version.
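For anyone debugging the same thing, a sketch of what the entrypoint script can report before it runs the sync (the 169.254.170.2 endpoint is where ECS/Fargate serves the task-role credentials; the bucket name is hypothetical):
#!/bin/sh
# An old CLI may not pick up task-role credentials reliably, so log the version
aws --version
# The ECS agent injects this variable; the credentials are served from the
# link-local endpoint below, so a 200 here means the task role is reachable
echo "Credential URI: ${AWS_CONTAINER_CREDENTIALS_RELATIVE_URI}"
curl -s -o /dev/null -w "credentials endpoint HTTP status: %{http_code}\n" \
  "http://169.254.170.2${AWS_CONTAINER_CREDENTIALS_RELATIVE_URI}"
# The actual job
aws s3 sync /var/log/myapp s3://my-log-bucket/myapp/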

AWS Windows EC2 Pull From S3 on Upload

I have a subset of Windows EC2 instances that I would like to continuously copy files to whenever files are uploaded to a specific S3 bucket. Files will be uploaded to this bucket anywhere between once a month and several times a month, but they need to be copied to the instances within an hour of upload. EC2 instances will be continually added to and removed from this subset of instances. I would like this functionality to be controlled by the EC2 instance so that whenever a new instance is created, it can be configured to pull from this bucket. Ideally, this would happen instantly upon upload (vs. a cron job running periodically). I have researched AWS Lambda and S3 notifications, and I am unsure if these are the correct methods to use. What solution is best suited to this model of copying files?
If you don't need "real time" presence of the files, you could run aws s3 sync on each instance via a cron job (the easy option), or use an S3 notification that triggers a Lambda function which delivers an EC2 Run Command to the instances.
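For the cron option, a minimal sketch (Linux crontab syntax; on the Windows instances in the question the same sync command would run from Task Scheduler, and the bucket and local path here are hypothetical):
# Pull new objects every 15 minutes; use the full path to aws if cron's PATH is minimal
*/15 * * * * aws s3 sync s3://my-upload-bucket/releases/ /opt/releases/ --quiet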
If the instances are in an Auto Scaling group, you can use aws s3 cp in the user data section of your launch configuration to accomplish this.
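A user-data sketch for the launch configuration (bucket and destination are hypothetical; on Windows AMIs the same command would sit inside a <powershell>...</powershell> user-data block):
#!/bin/bash
# Runs once at launch: pull the current files before the instance goes into service
aws s3 cp s3://my-upload-bucket/current/ /opt/app/files/ --recursive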

Integrating AWS EC2, RDS and... S3?

I'm following this tutorial and it works 100%, like a charm:
http://docs.aws.amazon.com/gettingstarted/latest/wah-linux/awsgsg-wah-linux.pdf
But that tutorial only uses Amazon EC2 and RDS. I was wondering: what if my servers scale up into multiple EC2 instances and I then need to update my PHP files?
Do I have to distribute them manually across those instances? Because, as far as I know, those instances are not synced with each other.
So I decided to use S3 as a replacement for my /var/www, so the PHP files are now centralised in one place.
That way, whenever those EC2 instances scale up, the files remain in one place and I don't need to upload to multiple EC2 instances.
Is having a centralised file server (S3) for /var/www the best practice? Because currently I'm still having permission issues when it's mounted using s3fs.
Thank you.
You have to put your /var/www/ in S3, and when your instances scale up they have to run aws s3 sync from your bucket; you can do that in the user data. You also have to select a 'master' instance where you make changes: a sync script uploads the changes to S3, and rsync copies the changes to your live front ends. This is because, if you have three front ends that downloaded /var/www/ from S3 and you want to make a new change, you would otherwise have to run an s3 sync on all of your instances.
You can manage changes on your 'master' instance with inotify. inotify can detect a change in /var/www/ and execute two commands: one could be aws s3 sync, followed by an rsync to the rest of your instances. You can get the list of your instances from the ELB through the AWS API.
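A rough sketch of that master-side watcher, assuming inotify-tools is installed, SSH keys are set up between the instances, and the bucket name and front-end addresses are placeholders:
#!/bin/bash
# Watch /var/www/ for changes; on each event push to S3, then rsync to the
# other front ends (their addresses could also be fetched from the ELB API).
FRONTENDS="10.0.1.11 10.0.1.12"
inotifywait -m -r -e modify,create,delete,move /var/www/ |
while read -r dir event file; do
    aws s3 sync /var/www/ s3://my-www-bucket/var-www/ --delete
    for host in $FRONTENDS; do
        rsync -az --delete /var/www/ "$host":/var/www/
    done
done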
The last thing is to enable termination protection on your 'master' instance.
Your architecture should look like this: http://www.markomedia.com.au/scaling-wordpress-in-amazon-cloud/
Good luck!!

Got image data in S3 bucket

I have imported a VHD file from the local network to EC2, and the data is now in an S3 bucket. The import ran properly, but I accidentally terminated the EC2 instance that was created from it. I still have the data in the S3 bucket, in parts. Can I use that data, or do I have to re-upload the image? It's 40 GB and would take over one working day to push to the cloud.
Yes, as long as you haven't specifically removed the file from your S3 bucket, you should be able to use it again from any future EC2 instance. For example, you could use the AWS CLI to copy the file from the S3 bucket to any number of EC2 instances.
If you had used the aws s3 mv or aws s3 rm commands, then I would expect the file to be gone.
The bottom line is: if the file is still in the bucket, you can still use it, provided you have the permissions set correctly.
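For example, a sketch of pulling the uploaded parts back down onto a new instance (bucket and prefix are hypothetical):
# See what the import left behind, then copy it locally
aws s3 ls s3://my-import-bucket/vm-import/ --recursive
aws s3 cp s3://my-import-bucket/vm-import/ ./vm-import/ --recursive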
I restarted the conversion task and re-uploaded the VM.