Transferring and updating files from local to AWS Ubuntu - amazon-web-services

So I have been using AWS Ubuntu EC2.
I used scp to transfer files from my local machine to the remote server. But when I edited the same files on my local machine and transferred them again with scp, the files were not updated on the remote server.
scp -i path/to/.pem -r /path/folder_name ubuntu@ec2-xx-xxx-xx-x.compute.amazonaws.com:/new_path/folder_name
How can I fix this problem? Thanks.

Try rsync instead?
rsync -e 'ssh -i path/to/.pem' -av /path/folder_name 192.0.2.1:/new_path/folder_name
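rsync compares the source and destination and only transfers files that have changed, so edits made locally will show up on the remote side. Matched to the user, host placeholder, and key from the original scp command, the call would look something like this (hostname and paths are the question's own placeholders):
rsync -av -e 'ssh -i path/to/.pem' /path/folder_name ubuntu@ec2-xx-xxx-xx-x.compute.amazonaws.com:/new_path/folder_name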

Related

Trying to download organization data to an external drive

I am trying to backup all of our Google Cloud data to an external storage device.
There is a lot of data so I am attempting to download the entire bucket at once and am using the following command to do so, but it halts saying that there isn't enough storage on the device to complete the transfer.
gsutil -m cp -r \
"bucket name" \
.
What do I need to add to this command to download this information to my local D: drive? I have searched through the available docs and have not been able to find the answer.
I used the gsutil command that GCP provided for me automatically, but it seems to be trying to copy the files to a destination without enough storage to hold the needed data.
Remember that you are running the command from Cloud Shell, not from a local terminal or the Windows Command Line. Cloud Shell's file system resembles a Unix environment, so you can specify the destination like ~/bucketfiles/ instead. Even a simple gsutil -m cp -R gs://bucket-name.appspot.com ./ will work, since Cloud Shell resolves ./ as the current directory.
A workaround is to run the command from your Windows Command Line instead; you would have to install the Google Cloud SDK beforehand.
Alternatively, this can also be done entirely from Cloud Shell, albeit with an extra step (see the sketch after these steps):
1. Download the bucket objects by running gsutil -m cp -R gs://bucket-name ~/, which places them in the Cloud Shell home directory.
2. Transfer the downloaded files from the Cloud Shell home directory (~/) to the local machine, either through the user interface or by running gcloud alpha cloud-shell scp.
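A minimal sketch of those two steps, assuming a bucket named gs://my-bucket (substitute your own) and the Cloud SDK installed on the local machine for the last command; zipping first keeps the transfer down to a single file:
# inside Cloud Shell: copy the bucket into a folder in the home directory and zip it up
mkdir -p ~/bucketfiles
gsutil -m cp -R gs://my-bucket ~/bucketfiles/
zip -r ~/bucketfiles.zip ~/bucketfiles/
# on the local machine: pull the single archive out of Cloud Shell
gcloud alpha cloud-shell scp cloudshell:~/bucketfiles.zip localhost:~/bucketfiles.zip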

How to download an entire bucket in GCP?

I have a problem downloading entire folder in GCP. How should I download the whole bucket? I run this code in GCP Shell Environment:
gsutil -m cp -R gs://my-uniquename-bucket ./C:\Users\Myname\Desktop\Bucket
and I get an error message: "CommandException: Destination URL must name a directory, bucket, or bucket subdirectory for the multiple source form of the cp command. CommandException: 7 files/objects could not be transferred."
Could someone please point out the mistake in the code line?
To download an entire bucket you must first install the Google Cloud SDK, then run this command:
gsutil -m cp -R gs://project-bucket-name path/to/local
where path/to/local is the destination path on your machine's local storage.
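For example, on Windows with the SDK installed, the bucket and Desktop folder from the question would be copied with something like this (create the destination folder first if it does not exist):
gsutil -m cp -R gs://my-uniquename-bucket "C:\Users\Myname\Desktop\Bucket"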
The error lies in the destination URL, as the error message itself indicates.
I run this code in GCP Shell Environment
Remember that you are running the command from Cloud Shell, not from a local terminal or the Windows Command Line, so it throws that error because it cannot find the Windows path you specified. Cloud Shell's file system resembles a Unix environment, so you can specify the destination like ~/bucketfiles/ instead. Even a simple gsutil -m cp -R gs://bucket-name.appspot.com ./ will work, since Cloud Shell resolves ./ as the current directory.
A workaround is to run the command from your Windows Command Line instead; you would have to install the Google Cloud SDK beforehand.
Alternatively, this can also be done entirely from Cloud Shell, albeit with an extra step:
1. Download the bucket objects by running gsutil -m cp -R gs://bucket-name ~/, which places them in the Cloud Shell home directory.
2. Transfer the downloaded files from the Cloud Shell home directory (~/) to the local machine, either through the user interface or by running gcloud alpha cloud-shell scp.
Your destination path is invalid:
./C:\Users\Myname\Desktop\Bucket
Change to:
/Users/Myname/Desktop/Bucket
C: is a reserved device name. You cannot specify reserved device names in a relative path. ./C: is not valid.
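Applied to the command from the question, that suggestion would look like this (the Desktop path is the question's own example and assumes a macOS/Linux-style local file system):
gsutil -m cp -R gs://my-uniquename-bucket /Users/Myname/Desktop/Bucket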
There is not a one-button solution for downloading a full bucket to your local machine through the Cloud Shell.
The best option for an environment like yours (only using the Cloud Shell interface, without gcloud installed on your local system) is to follow a series of steps:
1. Download the whole bucket into the Cloud Shell environment
2. Zip the contents of the bucket
3. Upload the zipped file back to the bucket
4. Download that file through the browser
5. Clean up: delete the files downloaded in Cloud Shell and the zipped file uploaded to the bucket
6. Unzip the bucket locally
This has the advantage of only having to download a single file on your local machine.
This might seem like a lot of steps for a non-developer, but it's actually pretty simple:
First, run this on the Cloud Shell:
mkdir /tmp/bucket-contents/
gsutil -m cp -R gs://my-uniquename-bucket /tmp/bucket-contents/
pushd /tmp/bucket-contents/
zip -r /tmp/zipped-bucket.zip .
popd
gsutil cp /tmp/zipped-bucket.zip gs://my-uniquename-bucket/zipped-bucket.zip
Then, download the zipped file through this link: https://storage.cloud.google.com/my-uniquename-bucket/zipped-bucket.zip
Finally, clean up:
rm -rf /tmp/bucket-contents
rm /tmp/zipped-bucket.zip
gsutil rm gs://my-uniquename-bucket/zipped-bucket.zip
After these steps, you'll have a zipped-bucket.zip file in your local system that you can unzip with the tool of your choice.
Note that this might not work if your bucket holds more data than the Cloud Shell environment can store, but you can repeat the same steps on individual folders instead of the whole bucket to keep each transfer at a manageable size (see the example below).
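For example, to process one top-level folder at a time, only the first copy step changes; folder-name here is a placeholder for a prefix in your bucket:
gsutil -m cp -R gs://my-uniquename-bucket/folder-name /tmp/bucket-contents/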

How to use sftp in cfncluster?

How can I transfer files using sftp to and from an AWS cluster created using cfncluster.
I have tried
sftp -i path/to/mykey.pem ec2-user@<MASTER.NODE.IP>
which produces
Connection closed
I also tried using Transmit and CyberDuck without any luck.
If you know a way of transfering files to and from cfncluster that does not use sftp please share that too.
You can add a post_install variable to your config file that points to an extra script to be run after cfncluster deployment:
post_install=https://s3-eu-west-aws-xxxxx/your_script.sh
with your script being like:
#!/bin/bash
# remove the existing "Subsystem sftp" line from the sshd config
sudo sed -i '/Subsystem\ sftp.*$/d' /etc/ssh/sshd_config
# insert "Subsystem sftp internal-sftp" before the last line of the config
sudo sed -i '$iSubsystem sftp internal-sftp' /etc/ssh/sshd_config
# restart sshd so the change takes effect
sudo service sshd restart
It's quite rough, but it works...
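Once sshd has restarted on the master node with the internal-sftp subsystem, the sftp invocation from the question should connect:
sftp -i path/to/mykey.pem ec2-user@<MASTER.NODE.IP>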

AWS ubuntu instance scp fail

I want to copy local files to AWS, and I used the following command, but it doesn't work. Can anyone help? etl is the folder I created and want to put all the files in.
scp -i C:/Users/Bonnie/Downloads/afinn.txt ubuntu@ec2-54-69-164-253.us-west-2.compute.amazonaws.com:/home/ubuntu/etl
thanks!
Dan, you are not passing the files you intend to copy to the remote server.
scp -i key filestocopy user@server:/folder/destination/
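Applied to the command in the question, that means the .pem key goes after -i and the local file(s) to copy come before the destination. A hypothetical corrected call (the key filename is a guess, since only afinn.txt appears in the question) would be:
scp -i C:/Users/Bonnie/Downloads/mykey.pem C:/Users/Bonnie/Downloads/afinn.txt ubuntu@ec2-54-69-164-253.us-west-2.compute.amazonaws.com:/home/ubuntu/etl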

Get files from guest to host in vagrant

Can I retrieve files from my vagrant machine (guest) and sync them to my host machine?
I know synced folders work the other way around, but I was hoping there is a way to do it in reverse: instead of syncing files from the host machine to the guest machine, retrieve the files from inside the guest machine and have them exposed on the host machine.
Thanks.
Why not just put them in the /vagrant folder of your Vagrant VM? This is a special folder mounted from the host directory containing the Vagrantfile into the guest.
This way you do not have to worry about any other copy operations between host and guest.
$ ls
Vagrantfile
$ vagrant ssh
Welcome to Ubuntu 16.04.1 LTS (GNU/Linux 4.4.0-42-generic x86_64)
Last login: Wed Oct 12 12:05:53 2016 from 10.0.2.2
vagrant#vagrant-virtualbox:~$ ls /vagrant
Vagrantfile
vagrant#vagrant-virtualbox:~$ cd /vagrant
vagrant#vagrant-virtualbox:/vagrant$ touch hello
vagrant#vagrant-virtualbox:/vagrant$ exit
logout
Connection to 127.0.0.1 closed.
$ ls
Vagrantfile hello
$
Have you tried to scp files between your host and your virtual machine? As I remember, the ssh login and password are both "vagrant".
Something like this could do the job :
scp vagrant@<vm_ip>:<path_to_file> <path_to_dest_on_your_host>
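If you are not sure which IP, port, user, or key to use, vagrant ssh-config prints the SSH parameters Vagrant itself uses; abbreviated output typically looks like this (values will vary per machine):
$ vagrant ssh-config
Host default
  HostName 127.0.0.1
  User vagrant
  Port 2222
  IdentityFile /path/to/project/.vagrant/machines/default/virtualbox/private_key
These values plug straight into the scp command above, or into the private-key variant in the next answer.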
Using scp with the private key makes this much easier!
scp -i ./.vagrant/machines/default/virtualbox/private_key -r -P 2222 vagrant@127.0.0.1:/home/vagrant/somefolder ./
You may want to try vagrant-rsync-back. I've not tried it yet.
Install Python, then run, e.g.
$ nohup python3.6 -m http.server &
in the output directory (with Python 2 the equivalent module is SimpleHTTPServer). I put this command in a file set to run always during provisioning. This solution required zero system configuration.
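A file served this way can then be fetched from the host with curl or a browser; a minimal sketch, assuming the guest's port 8000 (the module's default) is reachable from the host and somefile.txt exists in the served directory:
curl -O http://<vm_ip>:8000/somefile.txt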