How to transfer multiple files from a Linux server to AWS

Can someone let me know how to transfer multiple files from a Linux server to AWS?

If you want to copy the data to Amazon S3, the easiest method is to use the AWS Command-Line Interface (CLI), either:
aws s3 cp --recursive or
aws s3 sync
The sync command automatically recurses sub-directories and is generally a better option because it can be re-run and only copies files modified or added since the previous execution. Thus, it can be used to continue the copy after a failure, or the next day when new files have been added.
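For example, a minimal sketch (the directory and bucket names here are placeholders):
# copy an entire directory tree to S3
aws s3 cp /data/reports s3://my-bucket/reports/ --recursive
# or sync it, so that re-running the command only transfers new or changed files
aws s3 sync /data/reports s3://my-bucket/reports/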

Did you try using scp or sftp to transfer the files? If your local machine is a Linux one, you can use the terminal; on a Windows machine, use PuTTY.
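For example, a rough scp sketch (the key file, hostname and paths are placeholders):
# copy a directory recursively to an EC2 instance over SSH
scp -r -i my-key.pem /data/reports ec2-user@ec2-203-0-113-10.compute-1.amazonaws.com:/home/ec2-user/reports/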

Related

On-Premise file backup to aws

Use case:
I have one directory on-premises. I want to make a backup of it, let's say every midnight, and restore it if something goes wrong.
This doesn't seem like a complicated task, but reading through the AWS documentation, even this can be cumbersome and costly. Setting up Storage Gateway locally seems unnecessarily complex for a simple task like this, and setting it up on EC2 is costly as well.
What I have done:
Reading through this + some other blog posts:
https://docs.aws.amazon.com/AmazonS3/latest/userguide/Welcome.html
https://docs.aws.amazon.com/storagegateway/latest/userguide/WhatIsStorageGateway.html
What I have found:
1. Setting up a file gateway (locally or as an EC2 instance):
It just mounts the files to an S3 bucket, and that's it. So my on-premises app will constantly write to this S3 bucket. The documentation doesn't mention anything about scheduled backup and recovery.
2. Setting up a volume gateway:
Here I can make a scheduled synchronization/backup to an S3 bucket, but using a whole volume for it would be a big overhead.
3. Standalone S3:
Just use a bare S3 bucket and copy my backup there via the AWS API/SDK with a manually created scheduled job.
Solutions:
Using point 1 from above, enable versioning, and the versions of the files will serve as recovery points.
Using point 3
I think I am looking for a mix of the file and volume gateways: working at the file level and making an asynchronous scheduled snapshot of the files.
How should this be handled? Isn't there a really easy way that just sends a backup of a directory to AWS?
The easiest way to backup a directory to Amazon S3 would be:
Install the AWS Command-Line Interface (CLI)
Provide credentials via the aws configure command
When required, run the aws s3 sync command
For example:
aws s3 sync folder1 s3://bucketname/folder1/
This will copy any files from the source to the destination. It will only copy files that have been added or changed since a previous sync.
Documentation: sync — AWS CLI Command Reference
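To satisfy the nightly requirement in the question, a minimal cron sketch (the aws path, source directory and log file are assumptions):
# crontab entry: sync the directory to S3 at midnight every day
0 0 * * * /usr/local/bin/aws s3 sync /data/folder1 s3://bucketname/folder1/ >> /var/log/s3-backup.log 2>&1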
If you want to be fancier and keep multiple backups, you could copy to a different target directory each time, create a zip file first and upload that, or even use a backup program like CloudBerry Backup that knows how to use S3 and can do traditional-style backups.
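For instance, a rough sketch of the zip-first variant that keeps one dated archive per run (the archive and prefix names are hypothetical):
# build a dated archive and upload it as its own backup object
DATE=$(date +%F)
zip -r "backup-$DATE.zip" folder1
aws s3 cp "backup-$DATE.zip" "s3://bucketname/backups/backup-$DATE.zip"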

How can I move media files stored on my local machine to S3?

I have a Django application running on EC2. Currently, all my media files are stored on the instance. All the documents I uploaded to the models are on the instance too. Now I want to add S3 as my default storage. What I am worried about is how I am going to move my current media files to S3 after the integration.
I am thinking of running a Python script once, but I am looking for any built-in solution, or maybe just for opinions.
The AWS CLI should do the job:
aws s3 cp path/to/file s3://your-bucket/
or if the whole directory then:
aws s3 cp path/to/dir s3://your-bucket/ --recursive
All options can be seen here : https://docs.aws.amazon.com/cli/latest/reference/s3/cp.html
The easiest method would be to use the AWS Command-Line Interface (CLI) aws s3 sync command. It can copy files to/from Amazon S3.
However, if there are complicated rules associated with where to move the files, you could certainly use a Python script and the upload_file() method instead.
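For the simple case, a minimal sketch of the sync approach, assuming the media files live in a media/ directory (the bucket name is a placeholder):
# preview what would be uploaded
aws s3 sync media/ s3://your-bucket/media/ --dryrun
# then perform the actual copy
aws s3 sync media/ s3://your-bucket/media/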

Automate daily batch file transfer from a Linux EC2 instance to a Windows EC2 instance

I have 11 directories on a Linux EC2 instance where an external API adds data (.csv files). I need to schedule a job to copy ONLY those csv files from the 11 directories into matching directories on a Windows EC2 instance daily. Both instances are in the same VPC but in different security groups.
How can I accomplish the file transfer from Linux EC2 to a Windows EC2 in AWS?
"Pushing" content to a computer is always difficult due to security. And, in this situation, it is also cross-platform.
A simple solution would be:
Copy data from the source (Linux) computer to Amazon S3 on a regular schedule
Copy data from Amazon S3 to the destination (Windows) computer on a regular schedule
This can be done by triggering a script from cron / Scheduled Task that runs the AWS Command-Line Interface (CLI) aws s3 sync command. It is smart enough to only copy files that have been added or changed since the last use of the sync command.
See: aws s3 sync — AWS CLI Command Reference
You could copy the files hourly rather than daily, since there's no disadvantage.
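For example, a rough sketch of the Linux side (directory names, bucket and prefixes are placeholders); the Windows instance would run the reverse sync from a Scheduled Task:
# push only the .csv files from each source directory into a matching S3 prefix
for dir in dir01 dir02 dir03; do   # list all 11 directories here
    aws s3 sync "/data/$dir" "s3://my-transfer-bucket/$dir/" --exclude "*" --include "*.csv"
done
# on the Windows instance, the Scheduled Task would run something like:
# aws s3 sync s3://my-transfer-bucket/dir01/ C:\csv\dir01\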

Transfer 1TB of folders & files to AWS S3 from a VPS web server

I need to transfer all our files (with the folder structure) to AWS S3. I have researched a lot about how this is done.
Most places mention s3fs, but it looks like this is a bit old. I have tried to install s3fs on my existing CentOS 6 web server, but it gets stuck on the $ make command. (Yes, there is a Makefile.in.)
And as per this answer, AWS S3 Transfer Acceleration is the next best option. But I would still have to write a PHP script (my application is PHP) to transfer all folders and files to S3. It works the same way as a normal upload to S3 (the PutObject API), just faster. Please correct me if I am wrong.
Is there any better solution (I prefer FTP) to transfer 1TB of files and folders from the CentOS 6 server to AWS S3? Is there any way to use an FTP client on EC2 to transfer files from the external CentOS 6 server to AWS S3?
Use the aws s3 sync command of the AWS Command-Line Interface (CLI).
This will preserve your directory structure and can be restarted in case of disconnection. Each execution will only copy new, changed or missing files.
Be aware that 1TB is a lot of data and can take significant time to copy.
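For example (the source path and bucket name are placeholders):
# re-running the same command after an interruption only copies what is still missing or changed
aws s3 sync /var/www s3://my-bucket/website-backup/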
An alternative is to use AWS Snowball, which is a device that AWS can send to you. It can hold 50TB or 80TB of data. Simply copy your data to the device, then ship it back to AWS and they will copy the data to Amazon S3.

aws cli copy command halted

I used PuTTY to get into my AWS instance and ran a cp command to copy files into my S3 bucket.
aws s3 cp local s3://server_folder --recursive
Partway through, my internet dropped out and the copy halted even though the AWS instance was still running properly. Is there a way to make sure the cp command keeps running even if I lose my connection?
You can alternatively use the Minio Client (mc), which is open source and compatible with AWS S3. The Minio Client is available for Windows, macOS and Linux.
The mc mirror command copies local content to a remote AWS S3 bucket. If the upload fails because of a network issue, mc session resume will continue uploading from where the connection was terminated.
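A rough sketch of that workflow, using the older mc syntax that the command list below reflects (alias, keys and bucket names are placeholders; newer mc releases use mc alias set instead of mc config host add):
# register the S3 endpoint under an alias
mc config host add s3 https://s3.amazonaws.com ACCESS_KEY SECRET_KEY
# mirror a local folder into a bucket
mc mirror /path/to/local s3/my-bucket/backup
# after a dropped connection, list the saved sessions and resume the interrupted one
mc session list
mc session resume <session-id>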
mc supports these commands.
COMMANDS:
ls List files and folders.
mb Make a bucket or folder.
cat Display contents of a file.
pipe Write contents of stdin to one target. When no target is specified, it writes to stdout.
share Generate URL for sharing.
cp Copy one or more objects to a target.
mirror Mirror folders recursively from a single source to single destination.
diff Compute differences between two folders.
rm Remove file or bucket [WARNING: Use with care].
access Set public access permissions on bucket or prefix.
session Manage saved sessions of cp and mirror operations.
config Manage configuration file.
update Check for a new software update.
version Print version.
You can check docs.minio.io for more details.
Hope it helps.
Disclaimer: I work for Minio.