Transferring Files between two EC2 Instances in the same region - amazon-web-services

I have 2 EC2 instances running Ubuntu 14.04 and I need to figure out how to transfer files from one to the other. The Amazon FAQ says I can do this without incurring any additional cost if I use the private IP, but I am not sure how to transfer the files that way.
Right now I use scp to do this:
scp -i ~/Path-To-Key-File/AAA.pem /path/file ec2-user@<Elastic IP>:/path/file
I tried replacing the Elastic IP with the private IP but it doesn't work. Am I doing something wrong here?

Actually, I figured it out ... I just needed to replace the Elastic IP with the private IP and configure the security groups properly to allow instances to communicate!
Transferring from Machine A to Machine B
I am running this code on machine A
scp -i ~/Path-To-Key-File/AAA.pem /path/file ec2-user@<Private IP of Machine B>:/path/file
For security groups, I had to allow inbound SSH on Machine B from Machine A's private IP!!

Assuming both of your instances are EC2 Linux instances.
Suppose you want to transfer a file from the second instance (ec2-2) to the first instance (ec2-1); the command to run on ec2-1 is:
scp -i /Path-To-Key-File-for-ec2-2/key.pem ec2-user@Elastic-IP-of-ec2-2:/path/filename your/local-path-on-ec2-1/filename
A corresponding discussion can be found here.
Hope this helps!!

This question asks about authentication with the .pem file, but access without that key could be helpful in some cases. Here, you authorize another machine instead.
Say you want to ssh or scp from machine-1 to machine-2.
On machine-1:
Check whether there is a public key file (id_rsa.pub) in USER_HOME/.ssh/. If not, generate it with the ssh-keygen -t rsa command.
On machine-2:
Uncomment PubkeyAuthentication yes in /etc/ssh/sshd_config.
Open USER_HOME/.ssh/authorized_keys and append the contents of the id_rsa.pub file from machine-1.
Now you can copy with scp as follows:
scp username_machine1@ip_machine1:/file/to/copy /destination/path
You are done. Enjoy!!!
For detailed information please check here.
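The steps above can be sketched as a script. This is a minimal demo run against a throwaway directory; in real use the key lives in ~/.ssh on machine-1 and authorized_keys lives in ~/.ssh on machine-2 (the paths and the empty passphrase here are for illustration only):

```shell
set -e
dir=$(mktemp -d)

# machine-1: generate a key pair if none exists
# (-N "" means no passphrase -- for this demo only)
ssh-keygen -t rsa -N "" -f "$dir/id_rsa" -q

# machine-2: append the public key to authorized_keys
touch "$dir/authorized_keys"
cat "$dir/id_rsa.pub" >> "$dir/authorized_keys"
chmod 600 "$dir/authorized_keys"
```

After this, ssh and scp from machine-1 to machine-2 work without the .pem file.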

scp -i /home/centos/b1.pem centos@ip:/etc/httpd/conf/httpd.conf httpd.conf.j2

Copy data from local to EC2, or from one EC2 instance to another (if you are on the source EC2 instance):
scp -r -i <key file path> <file to copy> <Public DNS (IPv4)>:~/
Example:
scp -r -i practical.pem serverdata1.tar ubuntu@ec2-xx-xxx-xxx-xxx.ap-southeast-1.compute.amazonaws.com:~/

Related

How to copy file from EC2 to local machine?

How do I copy a file or folder from an EC2 instance? I want to download a file from my server but I have no idea how to do it.
P.S. I know how to copy within EC2.
You can use scp to securely copy a file from your EC2 instance to your local machine. You will need three things:
Your ec2key.pem key -- You created this when you created the EC2 instance
Your EC2 username and IP -- You can find this in the EC2 Console ('Connect to Instance' button)
Path to your file
On your local machine, open up your command line, and type:
scp -i ec2key.pem username@xx.xxx.xx.xxx:/path/to/file .
Note that the period at the end means the file is saved to the current directory ('here').

GCP VMs + ssh/config file

Hi guys,
GCP offers multiple ways of SSHing in: gcloud, Cloud Shell, and the Cloud SDK on a local machine.
While all these options are great and I have been using them, I normally prefer using .ssh/config to shorten the process of logging in to machines.
For example, for EC2, you just add:
Host $name
HostName $hostname
User $username
IdentityFile $pathtoFile
Is there any way to replicate this for GCP VMs?
Thanks
According to This Doc
If you have already connected to an instance through the gcloud tool, your keys are already generated and applied to your project or instance. The key files are available in the following locations:
Linux and macOS
Public key: $HOME/.ssh/google_compute_engine.pub
Private key: $HOME/.ssh/google_compute_engine
Windows
Public key: C:\Users\[USERNAME]\.ssh\google_compute_engine.pub
Private key: C:\Users\[USERNAME]\.ssh\google_compute_engine
You can use the key with the typical -i flag or in your .ssh/config file.
Or simply do
ssh-add ~/.ssh/google_compute_engine
to add the identity to your ssh agent.
PS> I've seen people create an alias for the ssh command, something like
alias gce='gcloud compute ssh'
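Putting it together, a ~/.ssh/config entry mirroring the EC2 pattern in the question, assuming the default gcloud key path (the host name, IP, and username are placeholders):

```
Host my-gce-vm
    HostName <external-ip-of-instance>
    User <your-gcloud-username>
    IdentityFile ~/.ssh/google_compute_engine
```

After that, `ssh my-gce-vm` works just like it does for EC2.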
If you want to SSH to different instances of a google cloud project (from a mac or Linux), do the following:
Step 1. Install SSH keys without password
Use the following command to generate the keys on your mac
ssh-keygen -t rsa -f ~/.ssh/<private-key-name> -C <your gcloud username>
For example private-key-name can be bpa-ssh-key. It will create two files with the
following names in the ~/.ssh directory
bpa-ssh-key
bpa-ssh-key.pub
Step 2. Update the public key on your GCP project
Go to the Google Cloud Console, choose your project, then
VMInstances->Metadata->SSH Keys->Edit->Add Item
Cut and paste the contents of the bpa-ssh-key.pub (from your mac) here and then save
Reset the VM Instance if it is running
Step 3. Edit config file under ~/.ssh on your mac
Edit the ~/.ssh/config to add the following lines if not present already
Host *
PubkeyAuthentication yes
IdentityFile ~/.ssh/bpa-ssh-key
Step 4. SSHing to GCP Instance
ssh username@<gcloud external IP>
It should open an SSH shell without asking for a password (since you created the RSA/SSH keys without a passphrase) on the gcloud instance.
Since metadata is common across all instances in the same project, you can seamlessly SSH into any of the instances by choosing the respective external IP of the gcloud instance.

How do I add pre-existing keys SSH to ansible? (crypto)

I am very new to ansible.
I have managed to install it, set up the ec2.py inventory script from git, and set up the IAM root user. But my question is: I already have an EC2 instance online that uses a .pem file that Amazon created. I use Windows and have created the corresponding .ppk file. When I try to ssh into that EC2 instance from another EC2 instance, I see via
cd ~/.ssh/ that the files authorized_keys and known_hosts exist,
but when I run ssh ubuntu@ec2-xx-xxx-xx-xxx.us-west-2....
I get Permission denied (publickey).
I examined the contents of the authorized_keys file and the .ppk and .pem files, and it seems that the public key is stored in the authorized_keys file correctly and the user is correct.
Am I correct in thinking that I need to copy the private key into this file? (Although I don't really want to.) Or is it because I need a passphrase?
And in relation to Ansible:
How do I use this key to manage the host in the same VPC?
Edit (extra): I found out that the authorized_keys file is the file that contains the public key and fingerprint. When I edited the file I was no longer able to access the EC2 instance, and it kept asking for a password and saying that the fingerprint had changed. So I guess that's why it's best practice to create an SSH key on the Ansible system and then import it into AWS.
If you can ssh to the host in question via putty with key.ppk file, then:
convert key.ppk back into key.pem
place key.pem somewhere onto the control host (where Ansible is installed)
define inventory (hosts file) for Ansible:
myserver ansible_host=ip-or-dns-of-your-server ansible_user=your-user ansible_ssh_private_key_file=path/to/key.pem
run ansible myserver -m ping to confirm connectivity
This way Ansible will try to connect to your server aliased myserver at ip-or-dns-of-your-server with your-user account using path/to/key.pem private key.
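For reference, the same inventory entry written out as a small hosts file (INI format), with the ping invocation; all values are the placeholders from the steps above:

```
# hosts
[aws]
myserver ansible_host=ip-or-dns-of-your-server ansible_user=your-user ansible_ssh_private_key_file=path/to/key.pem
```

Then: ansible -i hosts myserver -m ping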

AWS EC2 SSH with Private key to incorrect instance

I've setup a new EC2 Instance in AWS with a Private Key (downloaded and added to my ~/.ssh folder).
However, once the EC2 instance has started and I try to ssh to that instance's public IP "a.a.a.a" using the private key, it logs me in to a different IP/instance.
Is there an ssh or private-key cache of some sort I don't know about, or how come I get ssh'd into a different EC2 instance (in a different subnet)?
Instead of guessing, do this: once you ssh into the instance, invoke ec2metadata, which will list, among other data, the private IP and the public IP (if one is assigned) of the instance.
/usr/bin/ec2metadata
~$ ec2metadata
ami-id: ami-xxxxx
...
availability-zone: us-east-1a
...
instance-id: i-8080abcd
instance-type: m3.medium
...
local-ipv4: 10.2.1.40
...
public-hostname: ec2-23-64-195-76.compute-1.amazonaws.com
public-ipv4: 23.64.195.76
...
In case you do not find ec2metadata, download it (the EC2 Instance Metadata Query Tool):
$ wget http://s3.amazonaws.com/ec2metadata/ec2-metadata
I believe you have already found a solution; if not, you may consider trying the below.
On your Mac/Windows there should be a file called '/xxx/xxx/.ssh/known_hosts'; please try to find the entries for both IPs and remove those lines. (These lines are added when you SSH to a new instance, so there could be a conflict due to the old entries.) I had faced a similar issue, did this, and it worked. Thanks
You can still get all metadata or user data of the instance by making a simple HTTP request with curl/wget inside your instance:
$ curl http://169.254.169.254/latest/meta-data/
It should return all keys that you might be interested in getting its value like the following:
ami-id
ami-launch-index
ami-manifest-path
block-device-mapping/
hostname
instance-action
instance-id
instance-type
kernel-id
local-hostname
local-ipv4
mac
metrics/
network/
placement/
profile
public-hostname
public-ipv4
public-keys/
reservation-id
security-groups
Now get only the key/data you want by specifying its name; the following will help distinguish your instance(s):
$ curl http://169.254.169.254/latest/meta-data/public-ipv4
$ curl http://169.254.169.254/latest/meta-data/local-ipv4
$ curl http://169.254.169.254/latest/meta-data/ami-id
$ curl http://169.254.169.254/latest/meta-data/public-hostname
$ curl http://169.254.169.254/latest/meta-data/local-hostname
$ curl http://169.254.169.254/latest/meta-data/mac
To learn more about EC2 metadata, check http://www.dowdandassociates.com/blog/content/howto-get-amazon-ec2-instance-metadata/
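The curl calls above can be wrapped in a small helper. This is a hedged sketch: the METADATA_BASE variable is an assumption added here so the helper can be pointed at something other than the real endpoint (e.g. local files) when testing off-instance:

```shell
# Query one EC2 metadata key; METADATA_BASE override is for testing only.
get_metadata() {
  curl -s --max-time 2 "${METADATA_BASE:-http://169.254.169.254/latest/meta-data}/$1"
}

# On an instance you would call, for example:
#   get_metadata local-ipv4
#   get_metadata ami-id
```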

Uploading file to AWS from local machine

How do I use the scp command to upload a file to an AWS server?
I have the .pem file in /Downloads on my local machine.
I am trying to copy a file to the /images folder on the AWS server.
What command can I use?
Thanks,
You can use plain scp:
scp -i ~/Downloads/file.pem local_image_file user@ec2_elastic_ip:/home/user/images/
You need to assign an Elastic IP to the EC2 instance, open port 22 to your local machine's IP in the EC2 instance's security group, and use the right user (it can be ec2-user, admin, or ubuntu; check the AMI documentation).
Diego's answer works. However, if you don't know your Elastic IP, you can simply scp using the following command (check the order of arguments):
scp -i path-to-your-identifier.pem file-to-be-copied ubuntu@public-IP:/required-path
Just for reference: here ubuntu is your AWS user and public-IP is something like 54.2xx.xxx.xxx, e.g. 54.200.100.100.
(If the order is messed up, with the filename before the identifier, you'll get a Permission denied (publickey).lost connection error.)
Also, keep in mind the permissions of the .pem file: they should be 400 or 600, not readable by everyone.
Hope it helps!
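The permission check above can be done before every scp. A minimal sketch, using a throwaway temp file to stand in for your real .pem key:

```shell
set -e
key=$(mktemp)               # placeholder for your real .pem file
chmod 400 "$key"
perms=$(stat -c %a "$key")  # on macOS use: stat -f %Lp "$key"
echo "$perms"               # prints 400
```

If the value printed is anything looser than 400 or 600, ssh/scp will refuse the key with an "UNPROTECTED PRIVATE KEY FILE" warning.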
There are a number of ways to achieve what you want:
use s3cmd http://s3tools.org/s3cmd
or use Cyberduck http://cyberduck.ch/
or write a tool using the Amazon Java API
You can try the kitten utility, which is a wrapper around boto3. You can easily upload/download files and run commands on an EC2 server, or on multiple servers at once for that matter.
kitten put -i ~/.ssh/key.pem cat.jpg /tmp [SERVER NAME] [SERVER IP]
where the server name is e.g. ubuntu or ec2-user.
This will upload the cat.jpg file to the /tmp directory of the server.
This is the correct way to upload from local to remote:
scp -i "zeus_aws.pem" ~/Downloads/retail_market_db-07-01-2021.sql ubuntu@ec2-3-108-200-27.us-east-2.compute.amazonaws.com:/var/www/
Another alternative to scp, and possibly a better approach, is rsync.
Some of the benefits of rsync:
faster - uploads only the deltas
enable compression
you can exclude some files from the upload
resume
limit the transfer bandwidth
The rsync command:
rsync -ravze "ssh -i /home/your-user/your-key.pem" --exclude '.env' --exclude '.git/' /var/www/your-folder-to-upload/* ubuntu@xx.xxx.xxx.xxx:/var/www/your-remote-folder
Now, in case you find this syntax a little bit verbose, you can use aws-upload, which does all of the above with just tabbing.