A local project directory (with its files and subdirectories) used for web app development needs to move to the AWS cloud. Once there, changes made on the local machine will frequently need to be synced to the AWS copy to keep it up to date.
The local Mac has aws-shell installed. The app is built from a Dockerfile on EC2, so the project directory will eventually need to be on the EC2 instance.
Options:
1. Compress locally (to about 100 MB) and scp to EC2, then unzip on EC2 and use Docker?
2. Compress locally and copy to S3, then copy from S3 to EC2?
What commands are used to pull this off?
Thanks
Compress locally (to about 100 MB) and scp to EC2, then unzip on EC2 and use Docker?
What commands are used to pull this off?
scp
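A minimal sketch of option 1; the archive and project names, key file, user, and hostname below are placeholders, and the Dockerfile is assumed to sit at the project root:
# on the Mac: archive the project and copy it to the instance
zip -r myproject.zip myproject/
scp -i ~/.ssh/mykey.pem myproject.zip ec2-user@ec2-xx-xx-xxx-xxx.compute.amazonaws.com:~/
# on the EC2 instance (after ssh'ing in): unpack and build
unzip myproject.zip
cd myproject && docker build -t myapp .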
Compress locally and copy to S3, then copy from S3 to EC2?
What commands are used to pull this off?
aws s3 cp or aws s3 sync
See the documentation here
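A sketch of option 2 using aws s3 sync; the bucket name and paths are assumptions, and the instance needs an IAM role or credentials with access to the bucket:
# on the Mac: push the project directory to S3
aws s3 sync myproject/ s3://mybucket/myproject/
# on the EC2 instance: pull it down and build
aws s3 sync s3://mybucket/myproject/ ~/myproject/
cd ~/myproject && docker build -t myapp .
Because sync only copies new or changed files, re-running the same pair of commands also handles the ongoing local-to-AWS updates mentioned in the question.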
Related
I have already created an EC2 instance and connected to it over SSH.
How can I transfer my project from a GitHub repository to the AWS server? Which type of AWS service (EC2 or S3) can I use to do so?
In the GitHub web UI, find the big green "Code" button and click Code --> Download ZIP.
Use scp to transfer it to your EC2 instance.
Or use the AWS web console to upload the .zip to an S3 bucket.
Or install the AWS CLI and use
$ aws s3 cp myproject.zip s3://mybucket/
(aws s3 sync works on directories, so use cp for a single .zip file.)
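If the archive goes via S3, the instance can then pull it down with the CLI as well; the bucket name is an example, and the instance needs credentials or an IAM role with access to it:
# on the EC2 instance
aws s3 cp s3://mybucket/myproject.zip .
unzip myproject.zip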
I would say use the git clone command for your project. If you are unable to set up an SSH key, you can clone over HTTPS instead, which works the same way but asks for your credentials on every pull or push.
Regarding the service: if you are deploying an application you should use an EC2 instance, whereas S3 is object storage, more often used for storing files such as images. To run actual code you should use EC2.
The best way to do this is by using the git clone command.
Log in to your EC2 instance via SSH.
ssh -i "/path/to/pem/file.pem" remote-name@ec2-xx-xx-xxx-xxx.us-xxxx-x.compute.amazonaws.com
After logging in, run the following:
git clone <link to repo>
Please note that the link to the repository is shown once you click the Code button on your repo page.
The other way of doing this, i.e., downloading the repo and then uploading it to your EC2 instance via scp, is inefficient. Secure copy (scp) is only useful when the project already resides on your local machine; otherwise there is no point in first downloading and then uploading the whole project.
Also, I would not recommend putting your code base on S3, as it is not made for this purpose. It is better if your project resides on EC2. If you want your environment to persist when you stop and restart your instance, use an EBS volume.
I have a Django application running on EC2. Currently, all my media files are stored on the instance, including all the documents I uploaded to the models. Now I want to add S3 as my default storage. What I am worried about is how I am going to move my existing media files to S3 after the integration.
I am thinking of running a one-time Python script, but I am looking for any built-in solution, or simply for opinions.
The AWS CLI should do the job:
aws s3 cp path/to/file s3://your-bucket/
or, for a whole directory:
aws s3 cp path/to/dir s3://your-bucket/ --recursive
All options can be seen here : https://docs.aws.amazon.com/cli/latest/reference/s3/cp.html
The easiest method would be to use the AWS Command-Line Interface (CLI) aws s3 sync command. It can copy files to/from Amazon S3.
However, if there are complicated rules about where to move the files, then you can certainly use a Python script with the upload_file() method.
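For example, a one-time copy of the existing media directory with the sync command mentioned above might look like this; the local path and bucket/prefix are assumptions:
aws s3 sync /path/to/project/media/ s3://your-bucket/media/
The command can be re-run safely, since it only uploads files that are new or have changed.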
AWS provides a configuration setting to limit the upload bandwidth when copying files to S3 from EC2 instances. It can be set with the AWS config command below (the value shown is just an example):
aws configure set default.s3.max_bandwidth 50MB/s
Once this config is set and we run an AWS CLI command to cp files to S3, the bandwidth is limited.
But when I run the s3_sync Ansible module on the same EC2 instance, that limitation is not applied. Is there any possible workaround to apply the limitation to Ansible as well?
I am not sure if this is possible, because botocore may not support it.
It is mostly up to Amazon to fix their Python API.
For example, the Docker module works fine by sharing configuration between the CLI and the Python API.
Obviously, I assumed you ran this command locally as the same user; otherwise the AWS config you set would clearly not be used.
I need to transfer all our files (with the folder structure) to AWS S3. I have researched a lot about how this is done.
Most places mention s3fs, but that looks a bit old. I tried to install s3fs on my existing CentOS 6 web server, but it gets stuck on the $ make command (yes, there is a Makefile.in).
As per this answer, AWS S3 Transfer Acceleration is the next best option. But I would still have to write a PHP script (my application is in PHP) to transfer all folders and files to S3. It works the same way as saving a file to S3 (the putObject API), just faster. Please correct me if I am wrong.
Is there any better solution (I would prefer FTP) to transfer 1 TB of files and folders from the CentOS 6 server to AWS S3? Is there any way to use an FTP client on EC2 to transfer files from the external CentOS 6 server to AWS S3?
Use the aws s3 sync command of the AWS Command-Line Interface (CLI).
This will preserve your directory structure and can be restarted in case of disconnection. Each execution will only copy new, changed or missing files.
Be aware that 1TB is a lot of data and can take significant time to copy.
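A minimal sketch of that sync run from the CentOS 6 server; the source path and bucket name are assumptions, and the AWS CLI must be installed and configured with credentials first:
aws s3 sync /var/www/files/ s3://my-backup-bucket/files/
If the transfer is interrupted, re-running the same command resumes it, copying only the files that are not yet in the bucket.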
An alternative is to use AWS Snowball, which is a device that AWS can send to you. It can hold 50TB or 80TB of data. Simply copy your data to the device, then ship it back to AWS and they will copy the data to Amazon S3.
I'm running a Python script using Boto3 (first time using boto/Boto3) on my local server, which monitors an S3 bucket for new files. When it detects new files in the bucket, it starts a stopped EC2 instance that has software loaded onto it to process those files, and then needs to somehow instruct S3/EC2 to copy the new files from S3 to the EC2 instance. How can I achieve that with the Boto3 script running on my local server?
Essentially, the local script is the orchestrator of the process: it needs to start the instance when there are new files to process, have them processed on the EC2 instance, and copy the processed files back to S3. I'm currently stuck trying to figure out how to get the files copied from S3 to EC2 by the script running locally. I'd like to avoid downloading from S3 to the local server and then uploading to EC2.
Suggestions/ideas?
You should consider using Lambda for any S3 event-based processing. Why launch and run servers when you don't have to?
If the name of the bucket and other parameters don't change, you can achieve this simply by having a script on your EC2 instance that pulls the latest content from the bucket, and setting that script to be triggered every time your EC2 instance starts up (see the sketch after this answer).
If the S3 command parameters do change and you must run it from your local machine with boto, you'll need to find a way to ssh into the EC2 instance using boto. Check this module: boto.manage.cmdshell, and a similar question: Boto Execute shell command on ec2 instance.
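A minimal sketch of the first approach, a pull script triggered at start-up; the bucket name, local path, and the use of cron's @reboot are assumptions, and the instance needs an IAM role with access to the bucket:
#!/bin/bash
# /usr/local/bin/pull-from-s3.sh - pull the latest input files from the bucket
aws s3 sync s3://my-input-bucket/incoming/ /home/ec2-user/incoming/
# trigger it at every boot, e.g. with a cron entry:
# @reboot /usr/local/bin/pull-from-s3.sh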