I am looking at using Jenkins to deploy a WAR file to an EC2 instance. I have set up something similar before: creating an EC2 instance, an S3 bucket and a CodeDeploy application. The way that worked was:
1) Zip up the WAR/JAR and upload it to an S3 bucket.
2) Use the AWS createDeployment step to deploy the zip file from the S3 bucket to the EC2 instance. This also involved creating an appspec.yml and scripts to set up the environment.
But I have been told there is another way that does not need setting up CodeDeploy.
I have created an EC2 instance and set up a Docker container inside it with all the environment settings.
What I would like to do is load my zip file onto the EC2 instance, so that I don't need an AWS CodeDeploy application.
Is this correct? Is there an AWS CLI command to simply load a zip file onto the EC2 instance?
Thank you for any help.
You can copy from an S3 bucket.
To copy files from an S3 bucket to an EC2 instance:
1) Create an IAM role with S3 read access (or admin access).
2) Attach the IAM role to the EC2 instance.
3) Install the AWS CLI on the EC2 instance.
4) Run the aws s3 cp command to copy the files from S3 to the EC2 instance.
To copy the files from S3 to EC2, keep the source as the bucket URL and the destination as your local directory or filename:
aws s3 cp s3://<S3BucketName> <Fully Qualified Local filename/Directory>
In this command the source is the S3 bucket URL and the destination is a local file name or directory name.
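For example, assuming the IAM role is already attached and the AWS CLI is installed, copying a WAR file from a bucket onto the instance could look like this (the bucket name and destination paths are placeholders):
# copy a single file from the bucket to the instance
aws s3 cp s3://my-app-bucket/myapp.war /opt/tomcat/webapps/myapp.war
# or copy everything under a prefix
aws s3 cp s3://my-app-bucket/releases/ /opt/releases/ --recursive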
Related
I am trying to upload a file from my Linux server to my AWS S3 bucket. Can anyone please advise how to do so? I can only find documentation related to uploading files to an EC2 instance instead.
I do have the .pem key file present in my server directory.
I tried to run the following command, but it doesn't solve the issue:
scp -i My_PEM_FILE.pem "MY_FILE_TO_BE_UPLOADED.txt" MY_USER#S3-INSTANCE.IP.ADDRESS.0.compute.amazonaws.com
It is not possible to upload to Amazon S3 by using SSH.
The easiest way to upload from anywhere to an Amazon S3 bucket is to use the AWS Command-Line Interface (CLI):
aws s3 cp MY_FILE_TO_BE_UPLOADED.txt s3://my-bucket/
This will require an Access Key and a Secret Key to be stored via the aws configure command. You can obtain these keys from your IAM User in the IAM management console (Security Credentials tab).
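For reference, aws configure prompts for these values interactively and stores them under ~/.aws/ (the values below are placeholders):
aws configure
AWS Access Key ID [None]: AKIAXXXXXXXXXXXXXXXX
AWS Secret Access Key [None]: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
Default region name [None]: us-east-1
Default output format [None]: json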
See: aws s3 cp — AWS CLI Command Reference
If I sync an S3 bucket to an EC2 instance, when a new file is added to S3 is it automatically added to my EC2 instance, or does it only work the other way around (EC2 file --> added to S3 bucket)?
I need any new files added to my S3 bucket to also be added to my EC2 instance, so S3 bucket --> EC2.
No, a sync is a one-time operation. You would need to run it again each time files change, whether you are copying them to or from S3.
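If you need the instance to keep picking up new objects, a simple approach is to schedule the sync, for example with a cron entry like this (the bucket name and local directory are placeholders):
# pull anything new from the bucket every 5 minutes
*/5 * * * * aws s3 sync s3://my-bucket /home/ec2-user/data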
Depending on the OS of your EC2 instance, there are also ways to 'mount' an S3 bucket as a folder on the instance.
Googling 'mount s3 bucket to ec2' will get you started if that is what you want to do.
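As a rough sketch of the mount approach, using the s3fs-fuse tool with an IAM role attached to the instance (the bucket name and mount point are placeholders; check the s3fs documentation for the options that fit your setup):
# create a mount point and mount the bucket so it appears as a local folder
sudo mkdir -p /mnt/s3data
s3fs my-bucket /mnt/s3data -o iam_role=auto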
How can I run the AWS CLI to download S3 bucket data without storing AWS credentials on my local machine?
Please note that the S3 bucket is not a public bucket.
Not sure what your goal is, but you can use environment variables which you only export for the current session/AWS CLI run.
To prevent the export from being written to history in bash (assuming you are using Linux), you can put a space in front of the command.
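A minimal sketch of that, with placeholder values; note the leading space on the export lines, which only keeps them out of history when HISTCONTROL includes ignorespace or ignoreboth (many distributions set this by default):
 export AWS_ACCESS_KEY_ID=AKIAXXXXXXXXXXXXXXXX
 export AWS_SECRET_ACCESS_KEY=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
 export AWS_DEFAULT_REGION=eu-west-1
aws s3 cp s3://my-private-bucket/data.csv .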
You can start an EC2 instance and give that instance a role that allows it to read from your S3 bucket.
Once started, connect to the EC2 instance using SSH and initiate your S3 transfer using aws s3 cp ... or aws s3 sync ...
I'm new to cloud infrastructure for deep learning and am trying to use AWS for deep learning for the first time, and I don't know how to access my data from a launched EC2 instance.
My data is stored in an S3 bucket, but I can't figure out how to get at it and start training.
Log in to that EC2 instance via SSH.
Install the AWS CLI if it's not there.
Configure credentials: either give the EC2 instance permission to use the S3 bucket via an IAM role, or otherwise add your AWS access key and secret key with aws configure.
Get files from S3 to your local filesystem:
aws s3 cp s3://mybucket/test.txt test2.txt
Copy files from your local filesystem to S3:
aws s3 cp test.txt s3://mybucket/test2.txt
https://docs.aws.amazon.com/cli/latest/reference/s3/cp.html#examples
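For a training dataset you usually want a whole prefix rather than a single file; a hedged example, with made-up bucket and directory names:
# copy the whole dataset prefix onto the instance's local disk
aws s3 sync s3://my-dl-bucket/dataset/ ~/dataset/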
I need to prepare a Docker image with an embedded JAR file and push it to ECR. The JAR file is stored in an S3 bucket. How can I inject the JAR into the image without explicitly storing AWS access keys in the image?
Maybe I can use the AWS CLI, or does another way exist?
Also, it is not recommended to make my S3 bucket public or to set access keys via environment variables when executing docker run.
You can define an AWS IAM role and attach it to EC2 instances. Any instance that needs to run this docker build command can do so, as long as it has the IAM role attached to it. You can do this from the AWS Console. This solves the problem of putting AWS credentials on the instance itself.
You will still need to install the AWS CLI in your Dockerfile. Once the IAM role is attached, you don't have to worry about credentials.
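As a rough sketch only, assuming the docker build runs on an EC2 instance whose role can read the bucket (and that the instance metadata endpoint is reachable from the build container, which depends on your IMDS settings), a Dockerfile along these lines could pull the JAR during the build. The bucket name, JAR name and base image are placeholders:
FROM amazoncorretto:17
# install the AWS CLI inside the build (package name depends on the base image)
RUN yum install -y awscli
# credentials come from the instance role via the metadata service, not from the image
RUN mkdir -p /opt/app && aws s3 cp s3://my-artifact-bucket/app.jar /opt/app/app.jar
ENTRYPOINT ["java", "-jar", "/opt/app/app.jar"]
An alternative the same role enables is to run aws s3 cp on the host before the build and simply COPY the JAR into the image, which avoids installing the CLI in the image at all.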
Recommended docs:
IAM Roles for Amazon EC2
Here's an official blog post tutorial on how to do this:
Attach an AWS IAM Role to an Existing Amazon EC2 Instance by Using the AWS CLI
Just make sure you specify in the IAM Role which S3 Buckets you want these instances to have access to.