I have my app stored on AWS. How can I download all those files from AWS to my local system?
I know how to upload a file to AWS using scp on the command line, but I don't know how to download from AWS.
You can use scp for downloading too.
Upload: scp your-local-file yourID@yourAWShost:/directory/
Download: scp yourID@yourAWShost:/directory/filename /your/local/directory/
By the way, if you're using a ".pem" key with it, you need to add the "-i" option too.
For example (download from yourAWShost):
scp -i yourkey.pem yourID@yourAWShost:/directory/filename /your/local/directory/
The documentation says to download it, but when I try downloading, it downloads to my local workstation. Is there a way to just transfer it within the cloud?
To download a file from a GCS bucket to an instance:
SSH to the instance (using the gcloud ssh command or clicking the SSH button on the instances page in console.cloud.google.com).
Issue the command `gsutil cp gs://$BUCKET/$file .`
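For example, a minimal sketch of the two steps, where my-instance, us-central1-a, my-bucket and my-file are placeholders:
gcloud compute ssh my-instance --zone us-central1-a
gsutil cp gs://my-bucket/my-file .
The first command runs from your workstation; the second runs on the instance once you're logged in.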
I tried to follow the advice provided at https://stackoverflow.com/a/18136205/6608952 but was unsure how to supply the myAmazonKeypair path in a .pem file on the remote server.
scp -i yourAmazonKeypairPath.pem fileNameThatYouWantToTransfer.php ec2-user@ec2-00-000-000-15.us-west-2.compute.amazonaws.com:
The command completed after a few minutes with this display:
ssh: connect to host myBucketEndpointName port 22: Connection timed out
lost connection
I have a couple of very large files to transfer and would prefer not to have to download the files to my local computer and then re-upload them to the S3 bucket.
Any suggestions?
There is no direct way to upload files to S3 from a remote location, i.e. a URL.
So to achieve that, you have two options:
Download the file to your local machine and then upload it via the AWS Console or the AWS CLI.
Download the file onto an AWS EC2 instance and upload it to S3 with the AWS CLI.
The first method is pretty simple and needs no real explanation.
But for the second method, you'll need to do the following (a command sketch follows the list):
Create an EC2 instance in the same region as the S3 bucket, or, if you already have an instance, log in / SSH to it.
Download the file from the source to the EC2 instance, via wget or curl, whichever you're comfortable with.
Install the AWS CLI on the EC2 instance.
Create an IAM user and grant it permission for your S3 bucket.
Configure the AWS CLI with the IAM credentials.
Upload your file to the S3 bucket with the AWS CLI s3 cp utility.
Terminate the instance, if you set it up only for this.
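For reference, a rough sketch of what steps 2, 5 and 6 look like on the instance, assuming the AWS CLI is already installed and that the URL, file name and bucket name below are placeholders:
wget https://example.com/big-file.tar.gz
aws configure
aws s3 cp big-file.tar.gz s3://your-bucket-name/
aws configure prompts for the IAM user's access key, secret key and default region.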
You can do this easily with a shell script. If you have a list of URLs in files.txt, do it like this:
#!/bin/bash
input="files.txt"
while IFS= read -r line; do
  name=$(basename "$line")
  echo "$name"
  wget "$line"
  aws s3 mv "$name" <YOUR_S3_URI>
done < "$input"
Or for one file:
wget <FILE_URL> && aws s3 mv <FILE_NAME> <YOUR_S3_URI>
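If you'd rather not write the file to the instance's disk at all, aws s3 cp can read from stdin, so a streaming variant could look like this (same placeholders as above):
wget -qO- <FILE_URL> | aws s3 cp - <YOUR_S3_URI>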
I want to deploy x.war to my Tomcat on AWS, but I can't manage to upload the file.
I have typed this in my console:
scp -i x.pem /home/user/folder/x.war ubuntu@52.16.43.22/folder
but Ubuntu tells me it is not a directory. The directory /folder exists in my AWS instance.
You forgot the : between the server and the remote file path:
scp -i x.pem /home/user/folder/x.war ubuntu@52.16.43.22:/folder
I am trying to download a file from Amazon using
scp -r xxxxxxx@aws2.xxxxx.com:file.tar.gz /Users/myname/Desktop
It asks for a password, but I use a PEM key to connect.
Is this possible, or am I stuck with FTP?
Assuming you have an AWS EC2 Linux instance, to connect and download using the PEM file:
scp -i yours.pem xxxxxxx@awsec2ip:/path/to/file .
Here
yours.pem - your PEM key file
xxxxxxx - is the username you log in with
awsec2ip - is the IP or DNS alias of the instance
/path/to/file - is the location where the file is stored
This will copy the file into the current folder on the local machine.
Details on Download/Save files from EC2
Another option could be through SFTP.
You can find the details of how to SFTP using your PEM file here.
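A minimal SFTP sketch, reusing the placeholders from above (yours.pem, xxxxxxx, awsec2ip):
sftp -i yours.pem xxxxxxx@awsec2ip
get /path/to/file
bye
The get command downloads the remote file into your current local directory.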
scp -i /path/to/pem/file xxxxxxx@aws2.xxxxx.com:/path/to/file/file.tar.gz /Users/myname/Desktop
This command should work for a tar.gz file. But if you want to copy a directory recursively, you will have to add the -r switch.
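For example, a recursive copy of a whole remote directory might look like this (the directory paths are placeholders):
scp -r -i /path/to/pem/file xxxxxxx@aws2.xxxxx.com:/path/to/directory /Users/myname/Desktop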
How do I use the scp command to upload a file to an AWS server?
I have the .pem file in /Downloads on my local machine.
I am trying to copy a file to the /images folder on the AWS server.
What command can I use?
Thanks,
You can use plain scp:
scp -i ~/Downloads/file.pem local_image_file user#ec2_elastic_ip:/home/user/images/
You need to assign an Elastic IP to the EC2 instance, open port 22 to your local machine's IP in the EC2 instance's security group, and use the right user (it can be ec2-user, admin or ubuntu; look at the AMI documentation).
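As an illustration, opening port 22 to a single IP can be done with the AWS CLI like this (the security group ID and CIDR below are made-up placeholders):
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 22 --cidr 203.0.113.4/32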
Diego's answer works. However, if you're unaware of your Elastic IP, you can simply scp using the following command (check the order of the arguments):
scp -i path-to-your-identifier.pem file-to-be-copied ubuntu#public-IP:/required-path
Just for reference, here ubuntu is your AWS user and public-IP is something like 54.2xx.xxx.xxx, e.g. 54.200.100.100 or such.
(If the order is messed up, i.e. the filename comes before the identifier, you'll get a "Permission denied (publickey). lost connection" error.)
Also, keep in mind the permissions of the .pem file: they should be 400 or 600, not public to all.
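For example, tightening the key's permissions is just:
chmod 400 path-to-your-identifier.pem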
Hope it helps!
There are a number of ways to achieve what you want:
use s3cmd http://s3tools.org/s3cmd
or use Cyberduck http://cyberduck.ch/
or write a tool using the Amazon Java API
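For instance, after a one-time s3cmd --configure with your credentials, a download with s3cmd is roughly this (the bucket and file names are placeholders):
s3cmd get s3://your-bucket/path/to/file ./file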
You can try the kitten utility, which is a wrapper around boto3. You can easily upload/download files and run commands on an EC2 server, or on multiple servers at once for that matter.
kitten put -i ~/.ssh/key.pem cat.jpg /tmp [SERVER NAME][SERVER IP]
Where server name is e.g. ubuntu or ec2-user, etc.
This will upload the cat.jpg file to the /tmp directory of the server.
This is the correct way of uploading from local to remote:
scp -i "zeus_aws.pem" ~/Downloads/retail_market_db-07-01-2021.sql ubuntu@ec2-3-108-200-27.us-east-2.compute.amazonaws.com:/var/www/
A better approach could be the following.
An alternative to scp is rsync.
Some of the benefits of rsync:
faster - uploads only the deltas
can enable compression
you can exclude some files from the upload
can resume interrupted transfers
can limit the transfer bandwidth
The rsync cmd
rsync -ravze "ssh -i /home/your-user/your-key.pem " --exclude '.env' --exclude '.git/' /var/www/your-folder-to-upload/* ubuntu#xx.xxx.xxx.xxx:/var/www/your-remote-folder
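The download direction just swaps the source and destination, e.g. (same placeholders):
rsync -ravze "ssh -i /home/your-user/your-key.pem" ubuntu@xx.xxx.xxx.xxx:/var/www/your-remote-folder/ /var/www/your-local-folder/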
Now, in case you find this syntax a little bit verbose, you can use aws-upload, which does all of the above with just tab completion.