Upload files to S3 during deploy - amazon-web-services

I want to create a bucket during the deployment process, but when I do this, a problem with assets appears: "must have values". So I decided to create one stack that only uploads the files and another stack that deploys an EC2 instance. With this approach, though, the EC2 UserData can't find the files on S3 to download them. I need these files to configure my instance. I could create the S3 bucket manually before deploying the EC2 instance, but I want to automate this process. How can I do this?

You need to configure S3 access on the machine where you want to automate the process.
Use the AWS CLI and run aws configure on your server to define the credentials.
OR
If it is an EC2 instance, create an IAM role with S3 write permissions and attach it to the instance.
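For the IAM route, a minimal CDK (TypeScript) sketch inside a stack might look like this; the construct and bucket names are placeholders, and vpc is assumed to be defined elsewhere:

import * as ec2 from 'aws-cdk-lib/aws-ec2';
import * as iam from 'aws-cdk-lib/aws-iam';
import * as s3 from 'aws-cdk-lib/aws-s3';

// Role that the EC2 service assumes on the instance's behalf
const role = new iam.Role(this, 'InstanceRole', {
  assumedBy: new iam.ServicePrincipal('ec2.amazonaws.com'),
});

const bucket = new s3.Bucket(this, 'UploadBucket'); // placeholder
bucket.grantWrite(role); // the "S3 write permissions" mentioned above

// Passing the role when defining the instance attaches it via an instance profile
new ec2.Instance(this, 'Instance', {
  vpc, // assumed to be defined elsewhere
  instanceType: ec2.InstanceType.of(ec2.InstanceClass.T3, ec2.InstanceSize.MICRO),
  machineImage: ec2.MachineImage.latestAmazonLinux2(),
  role,
});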

You can do the following:
Create 2 separate stacks (we'll refer to them as s3Stack and ec2Stack)
Add ec2Stack.addDependency(s3Stack) where you create these stacks
In the s3 stack, create the bucket and upload the assets using aws-s3-deployment
Give permissions to the ec2 instance to get necessary files from the previously created bucket.
This ensures you can deploy everything with a single command: cdk deploy ec2Stack. CDK will check whether s3Stack needs to be created or updated first, and only once those updates are done will your ec2Stack be deployed.
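A minimal sketch of the two stacks in CDK (TypeScript); the bucket, the ./assets path, and config.sh are placeholders:

import * as cdk from 'aws-cdk-lib';
import * as ec2 from 'aws-cdk-lib/aws-ec2';
import * as s3 from 'aws-cdk-lib/aws-s3';
import * as s3deploy from 'aws-cdk-lib/aws-s3-deployment';

class S3Stack extends cdk.Stack {
  readonly bucket: s3.Bucket;
  constructor(scope: cdk.App, id: string) {
    super(scope, id);
    this.bucket = new s3.Bucket(this, 'AssetBucket');
    // Uploads the local ./assets directory to the bucket at deploy time
    new s3deploy.BucketDeployment(this, 'DeployAssets', {
      destinationBucket: this.bucket,
      sources: [s3deploy.Source.asset('./assets')],
    });
  }
}

class Ec2Stack extends cdk.Stack {
  constructor(scope: cdk.App, id: string, bucket: s3.Bucket) {
    super(scope, id);
    const vpc = new ec2.Vpc(this, 'Vpc');
    const instance = new ec2.Instance(this, 'Instance', {
      vpc,
      instanceType: ec2.InstanceType.of(ec2.InstanceClass.T3, ec2.InstanceSize.MICRO),
      machineImage: ec2.MachineImage.latestAmazonLinux2(),
    });
    bucket.grantRead(instance.role); // step 4: let the instance fetch the files
    instance.userData.addCommands(   // download the config file at boot
      `aws s3 cp s3://${bucket.bucketName}/config.sh /tmp/config.sh`,
    );
  }
}

const app = new cdk.App();
const s3Stack = new S3Stack(app, 's3Stack');
const ec2Stack = new Ec2Stack(app, 'ec2Stack', s3Stack.bucket);
ec2Stack.addDependency(s3Stack); // step 2: enforce deploy order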

Related

AWS - How to send S3 artifacts from codebuild to EC2 instance

I recently managed to send my artifacts to an S3 bucket using CodeBuild, but now I want to send those exact artifacts to a specific place on my EC2 instance.
I've been reading the AWS docs non-stop, but I haven't been able to configure CodeDeploy in a way that works. Can anyone point me to a proper source that teaches how to use appspec files and how CodeDeploy works?
Thanks.
CodeDeploy simply fetches your code from S3/GitHub onto your EC2 instances and deploys it using appspec.yml.
Place your appspec.yml file in the root of your code.
https://docs.aws.amazon.com/codedeploy/latest/userguide/reference-appspec-file.html
Create a deployment group which will contain either your EC2 instances (use tags to find them) or an Auto Scaling group.
Configure the deployment strategy to suit your requirement: AllAtOnce, HalfAtATime, or OneAtATime, and you're done.
(Make sure your EC2 instances are running the CodeDeploy agent.)
https://docs.aws.amazon.com/codedeploy/latest/userguide/codedeploy-agent-operations-install.html
Create a deployment, which will fetch your code from the S3 bucket and deploy it on the EC2 instances.
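If you'd rather define the deployment group in code, a rough CDK (TypeScript) sketch inside a stack might look like this; the application name and the tag key/value are placeholders:

import * as codedeploy from 'aws-cdk-lib/aws-codedeploy';

const application = new codedeploy.ServerApplication(this, 'App', {
  applicationName: 'my-app', // placeholder
});

new codedeploy.ServerDeploymentGroup(this, 'DeploymentGroup', {
  application,
  deploymentGroupName: 'my-app-group',
  // Find the target EC2 instances by tag, as described above
  ec2InstanceTags: new codedeploy.InstanceTagSet({ 'deploy-group': ['production'] }),
  // One of ALL_AT_ONCE, HALF_AT_A_TIME, ONE_AT_A_TIME
  deploymentConfig: codedeploy.ServerDeploymentConfig.ONE_AT_A_TIME,
  // The CodeDeploy agent must still be running on the instances (see link above)
});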

How can I access S3 Bucket from within ECS Task

I'm currently debugging an ECS task which basically grabs a message from an SQS queue, downloads a file from S3, manipulates it, and uploads it back to S3. This script works fine locally, and it also works fine if I run it locally in a Docker container. When I create a task in ECS with the Docker image and let it run, it doesn't seem to process the file. To find the problem, I created a very small script which simply uploads a file to S3:
aws s3 cp hello-world.txt s3://my-bucket-name/hello-world.txt
This again works fine locally, and it works fine in a Docker container (locally). When I create an ECS task for it, it simply won't work. The ECS task has a role with "Full S3 Access"... any ideas?
Could it be that I need a bucket policy on my S3 bucket? I thought it would be sufficient to grant access to the AWS services that need it, but apparently that's not working... and using my admin account I can use the awscli to create objects in my bucket...
EDIT
Ok, it seems that the problem is the region. I created another bucket in a different region (it was Frankfurt before, now Ireland) and now I can copy to the bucket as I would expect. Funnily, I can create buckets programmatically (even from within my ECS task), but I can't seem to create objects in buckets located in Frankfurt.
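In code, the fix amounts to telling the client which region the bucket lives in (newer regions such as Frankfurt accept only Signature Version 4 requests signed for the correct region). A minimal sketch with the AWS SDK for JavaScript v3; the bucket name is a placeholder:

import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";

// Explicit region: a Frankfurt bucket rejects requests signed for another region.
const s3 = new S3Client({ region: "eu-central-1" });

async function main() {
  await s3.send(new PutObjectCommand({
    Bucket: "my-bucket-name", // placeholder
    Key: "hello-world.txt",
    Body: "hello world",
  }));
}

main().catch(console.error);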

AWS Windows EC2 Pull From S3 on Upload

I have a subset of Windows EC2 instances to which I would like to copy files whenever files are uploaded to a specific S3 bucket. Files will be uploaded to this bucket anywhere from once a month to several times a month, but they need to be copied to the instances within an hour of upload. EC2 instances will be continually added to and removed from this subset. I would like this functionality to be controlled by the EC2 instance, so that whenever a new instance is created it can be configured to pull from this bucket. Ideally this would happen instantly upon upload (vs. a cron job running periodically). I have researched AWS Lambda and S3 event notifications, and I am unsure if these are the correct methods to use. What solution is best suited to this model of copying files?
If you don't need "real time" presence of the files, you can run aws s3 sync on each instance from a cron job (the easy option), or use an S3 event notification that triggers a Lambda function, which in turn delivers an EC2 Run Command (see the sketch below).
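A rough sketch of that notification route, assuming a Node.js Lambda subscribed to the bucket's ObjectCreated events and Windows instances carrying a dedicated tag; the tag and destination path are placeholders, and the instances need the SSM agent and the AWS CLI installed:

import { SSMClient, SendCommandCommand } from "@aws-sdk/client-ssm";
import type { S3Event } from "aws-lambda";

const ssm = new SSMClient({});

export const handler = async (event: S3Event) => {
  const record = event.Records[0];
  const bucket = record.s3.bucket.name;
  const key = decodeURIComponent(record.s3.object.key.replace(/\+/g, " "));

  // Run a copy command on every Windows instance carrying the tag
  await ssm.send(new SendCommandCommand({
    DocumentName: "AWS-RunPowerShellScript", // the instances are Windows
    Targets: [{ Key: "tag:s3-sync", Values: ["true"] }], // placeholder tag
    Parameters: {
      commands: [`aws s3 cp "s3://${bucket}/${key}" "C:\\incoming\\"`],
    },
  }));
};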
If the instances are in an Auto Scaling group, you can use aws s3 cp in the user data section of your launch configuration to accomplish this.

How to download a file from s3 using an EC2 instance?

I have an AMI image which will be used for auto scaling. Every EC2 instance launched from the AMI is supposed to download some files from an S3 bucket (they are all in the same VPC), and the bucket is supposed to be private (not open to the public).
How can this be done?
There are lots of ways. You could use the AWS CLI (the s3 commands) or the SDK for the language of your choice. You will also probably want to use IAM to establish the credentials for accessing the resources. The CLI is probably the quickest way to get up and running.
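For instance, a minimal sketch with the AWS SDK for JavaScript v3, assuming the instance's IAM role supplies the credentials; the bucket, key, and local path are placeholders:

import { S3Client, GetObjectCommand } from "@aws-sdk/client-s3";
import { createWriteStream } from "node:fs";
import { pipeline } from "node:stream/promises";
import type { Readable } from "node:stream";

// No explicit credentials: the instance profile provides them.
const s3 = new S3Client({});

async function download() {
  const { Body } = await s3.send(new GetObjectCommand({
    Bucket: "my-private-bucket",  // placeholder
    Key: "bootstrap/config.json", // placeholder
  }));
  await pipeline(Body as Readable, createWriteStream("/tmp/config.json"));
}

download().catch(console.error);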

Autoscaling ec2 instance without loss of old data

Recently my website moved to Amazon.
I created an EC2 instance, installed LAMP, and set up CodeIgniter in the /var/www/http folder.
Inside the CodeIgniter structure I have a folder named 'UPLOAD', which is used for uploaded images and files.
I made an AMI image from the EC2 instance and set up Auto Scaling of EC2 instances.
When my old EC2 instance fails, a new instance is created automatically, but all the data in the 'UPLOAD' folder on the old instance is lost.
I want to separate the 'UPLOAD' folder from the EC2 instance, so that whenever a new instance is created it gets the UPLOAD folder and its contents without loss.
How can I do this?
Thanks in advance.
Note: I have used MySQL on Amazon RDS.
You can use an Elastic Block Storage (EBS) volume mounted as a shared directory.
If you configure your stack manually using the AWS Console, go to the EC2 service in the console, then to Elastic Block Store -> Volumes -> Create Volume, and bind to this storage device in your launch configuration.
If you are using the command line tool as-create-launch-config to create your launch config, you need the argument
--block-device-mapping "key1=value1,key2=value2..."
If you are using Cloudformation to provision your stack, refer to this template for guidance.
This assumes CodeIgniter can be configured to state where its UPLOAD directory is.
As Mike said, you can use EBS, but you can also use Amazon Simple Storage Service (S3) to store your images.
This way, whenever an instance starts, it can access all the previously uploaded images from S3. Of course, this means you must change your upload code to store the images in S3 through the AWS API rather than on the local filesystem (a sketch of the idea follows).
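The app in question is PHP/CodeIgniter, so this is only an illustration of the pattern, here in TypeScript with the AWS SDK v3; the bucket name and key prefix are placeholders:

import { S3Client } from "@aws-sdk/client-s3";
import { Upload } from "@aws-sdk/lib-storage";
import { createReadStream } from "node:fs";

const s3 = new S3Client({});

// Instead of moving the uploaded file into the local UPLOAD folder,
// stream it to the bucket so every instance can see it.
async function saveUpload(localTmpPath: string, filename: string): Promise<string> {
  await new Upload({
    client: s3,
    params: {
      Bucket: "my-site-uploads", // placeholder
      Key: `UPLOAD/${filename}`,
      Body: createReadStream(localTmpPath),
    },
  }).done();
  return `UPLOAD/${filename}`; // store this key in RDS instead of a local path
}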