Transferring Cloud Custodian output JSON file to S3 - amazon-web-services

I have a requirement: I am using Cloud Custodian to collect resource metadata in a dev environment. I created a sample policy.yml file for EC2 like the one below:
policies:
  - name: my-first-policy
    resource: ec2
When I run this command from an EC2 instance:
custodian run --dryrun -s . policy.yml
I can see that a new directory named "my-first-policy" has been created in the current directory. In this directory there is a file, resources.json, which includes all the details of the EC2 instances. I want to send this file to S3 whenever I run the Cloud Custodian command. How can I do this from the command line?
Is there a policy that can be written which would transfer the resources.json file to S3 whenever I run the command?

You can supply an S3 path as the value of the -s / --output-dir argument:
custodian run --dryrun -s s3://mys3bucketpath policy.yml
Then you can see the output stored in S3 directly:
aws s3 ls s3://mys3bucketpath
References:
https://cloudcustodian.io/docs/aws/usage.html#s3-logs-records
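If you prefer to keep writing the output locally and push it to S3 yourself, a minimal wrapper sketch (the bucket path, policy name and file name here are assumptions; Cloud Custodian normally writes resources.json under a directory named after the policy):
#!/bin/bash
set -e
# Run the policy with local output, then copy the result to S3
custodian run --dryrun -s . policy.yml
aws s3 cp my-first-policy/resources.json s3://mys3bucketpath/my-first-policy/resources.json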

Related

How to redirect AWS S3 sync output to a file?

When I run aws s3 sync "local drive" "S3bucket", I see a bunch of logs generated on my AWS CLI console. Is there a way to direct these logs to an output/log file for future reference?
I am trying to schedule a SQL job which executes the PowerShell script that syncs backups from a local drive to an S3 bucket. Backups are getting synced to the bucket successfully. However, I am trying to figure out a way to direct the sync progress to an output file. Help appreciated. Thanks!
Simply redirect the output of the command into a file using the ">" operator.
The file does not have to exist beforehand (and in fact will be overwritten if it does exist).
aws s3 sync . s3://mybucket > log.txt
If you wish to append to the given file then use the following operator: ">>".
aws s3 sync . s3://mybucket >> existingLogFile.txt
To test this command, you can use the --dryrun argument to the sync command:
aws s3 sync . s3://mybucket --dryrun > log.txt
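Note that the CLI writes error messages to stderr, not stdout, so if you also want those in the log file, redirect stderr as well:
aws s3 sync . s3://mybucket > log.txt 2>&1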

Can The AWS CLI Copy From S3 To EC2?

I'm familiar with running the AWS CLI command to copy from a folder to S3 or from one S3 bucket to another S3 bucket:
aws s3 cp ./someFile.txt s3://bucket/someFile.txt
aws s3 cp s3://bucketSource/someFile.txt s3://bucketDestination/someFile.txt
But is it possible to copy files from S3 to an EC2-Instance when you're not on the EC2-Instance? Something like:
aws s3 cp s3://bucket/folder/ ec2-user@1.2.3.4:8080/some/folder/
I'm trying to run this from Jenkins which is why I can't simply run the command on the EC2 like this:
aws s3 cp s3://bucket/folder/ ./my/destination/folder/on/the/ec2
Update:
I don't think this is possible so I'm going to look into using https://docs.aws.amazon.com/cli/latest/reference/ssm/send-command.html
No.
The AWS CLI calls the AWS API. The APIs for Amazon S3 do not have the ability to interact with the operating system on an Amazon EC2 instance.
Your idea of using AWS Systems Manager is a good idea. You can send the command to the instance itself, and the instance can then upload/download objects to Amazon S3.
Since you have SSH access, you could also just run
ssh ec2-user@1.2.3.4 "aws s3 cp s3://bucket/folder/ ./my/destination/folder/on/the/ec2"
... to run the command on the EC2 instance directly.
It's not as efficient as using send-command (because ssh will necessarily pipe the output of that command to your local terminal) but, if you're not transferring millions of files, the tradeoff in simplicity may be acceptable for you.
Using AWS Systems Manager send-command:
# Copying a file from an S3 bucket to an EC2 instance:
$Instance_Id='i-0123456xxx'
aws ssm send-command --document-name "AWS-RunShellScript" --document-version "\$DEFAULT" --targets "Key=instanceids,Values='$Instance_Id'" --parameters '{"commands":["aws s3 cp s3://s3-bucket/output-path/file-name /dirName/ "]}' --timeout-seconds 600 --max-concurrency "50" --max-errors "0" --region REGION_NAME
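send-command is asynchronous, so the CLI returns before the copy finishes. One way to check the result (a sketch; the CommandId comes from the output of the send-command call above) is:
aws ssm get-command-invocation --command-id <CommandId-from-send-command> --instance-id $Instance_Id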

How can I transfer a remote file to my S3 bucket using AWS CLI?

I tried to follow the advice provided at https://stackoverflow.com/a/18136205/6608952 but was unsure how to specify my Amazon keypair path (the .pem file) on the remote server.
scp -i yourAmazonKeypairPath.pem fileNameThatYouWantToTransfer.php ec2-user@ec2-00-000-000-15.us-west-2.compute.amazonaws.com:
The command completed after a few minutes with this display:
ssh: connect to host
myBucketEndpointName
port 22: Connection timed out
lost connection
I have a couple of very large files to transfer and would prefer not to have to download the files to my local computer and then re-upload them to the S3 bucket.
Any suggestions?
There is no direct way to upload files to S3 from a remote location, i.e. a URL.
So to achieve that, you have two options:
Download the file on your local machine and then upload it via AWS Console or AWS CLI.
Download the file in AWS EC2 Instance and upload to S3 by AWS CLI.
The first method is pretty simple, not much explanation needed.
But for the second method, you'll need to:
Create an EC2 instance in the same region as the S3 bucket, or if you already have an instance, log in / SSH to it.
Download the file from the source to the EC2 instance, via wget or curl, whichever you prefer.
Install the AWS CLI on the EC2 instance.
Create an IAM user and grant it permission to access your S3 bucket (a rough sketch follows these steps).
Configure your AWS CLI with your IAM Credentials.
Upload your file to S3 Bucket with AWS CLI S3 CP Utility.
Terminate the Instance, if you set up the instance only for this.
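As a rough sketch of the IAM step (the user name s3-uploader is made up, and AmazonS3FullAccess is broader than you need; a bucket-scoped policy is preferable in practice):
# Create a user, attach an S3 policy, and generate access keys to feed into "aws configure"
aws iam create-user --user-name s3-uploader
aws iam attach-user-policy --user-name s3-uploader --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess
aws iam create-access-key --user-name s3-uploader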
You can do it easily with a shell script. If you have a list of URLs in files.txt, do it as described here:
#!/bin/bash
input="files.txt"
while IFS= read -r line; do
  name=$(basename "$line")
  echo "$name"
  wget "$line"
  aws s3 mv "$name" <YOUR_S3_URI>
done < "$input"
Or for one file:
wget <FILE_URL> && aws s3 mv <FILE_NAME> <YOUR_S3_URI>
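As a side note (not from the original answer), for very large files you can avoid writing to the instance's disk entirely by streaming the download straight into S3; aws s3 cp accepts - as the source to read from stdin. The placeholders mirror the ones above:
curl -L <FILE_URL> | aws s3 cp - s3://<YOUR_BUCKET>/<FILE_NAME>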

How to move files from amazon ec2 to s3 bucket using command line

In my Amazon EC2 instance, I have a folder named uploads. In this folder I have 1000 images. Now I want to copy all the images to my new S3 bucket. How can I do this?
First option: s3cmd
Use s3cmd
s3cmd get s3://AWS_S3_Bucket/dir/file
Take a look at this s3cmd documentation
If you are on Linux (Debian/Ubuntu), run this on the command line:
sudo apt-get install s3cmd
or on CentOS / Fedora:
yum install s3cmd
Example of usage:
s3cmd put my.file s3://pactsRamun/folderExample/fileExample
Second option
Using the CLI from Amazon
Update
Like @tedder42 said in the comments, instead of using cp, use sync.
Take a look at the following syntax:
aws s3 sync <source> <target> [--options]
Example:
aws s3 sync . s3://my-bucket/MyFolder
More information and examples are available at Managing Objects Using High-Level s3 Commands with the AWS Command Line Interface.
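As a further example (not covered in the original answer), sync also accepts a --delete flag that removes objects from the destination that no longer exist in the source:
aws s3 sync . s3://my-bucket/MyFolder --delete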
aws s3 sync your-dir-name s3://your-s3-bucket-name/folder-name
Important: This will copy each item in your named directory into the s3 bucket folder you selected. This will not copy your directory as a whole.
Or, for one selected file, use cp instead (sync operates on directories, not single files):
aws s3 cp your-dir-name/file-name s3://your-s3-bucket-name/folder-name/file-name
Or you can use the current directory (.) as the source to copy everything. Note that this copies your directory as a whole into your S3 bucket folder.
aws s3 sync . s3://your-s3-bucket-name/folder-name
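If you only want to copy certain file types, sync also supports --exclude/--include filters; a sketch assuming the uploads directory and .jpg images mentioned in the question:
aws s3 sync uploads s3://your-s3-bucket-name/folder-name --exclude "*" --include "*.jpg"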
To copy from EC2 to S3 use the below code in the Command line of EC2.
First, you have to give "IAM Role with full s3 Access" to your EC2 instance.
aws s3 cp Your_Ec2_Folder s3://Your_S3_bucket/Your_folder --recursive
Also note that when the AWS CLI syncs with S3, it is multithreaded and uploads multiple parts of a file at one time. Newer CLI versions let you tune this through the s3 max_concurrent_requests configuration setting.
aws s3 mv /home/inbound/ s3://test/ --recursive --region us-west-2
This can be done very simply. Follow the following steps:
Open EC2 in the AWS console.
Select the instance and navigate to Actions.
Select Instance Settings and choose Attach/Replace IAM Role.
When this is done, connect to the instance; the rest is done via the following CLI command:
aws s3 cp filelocation/filename s3://bucketname
Hence you don't need to install anything or make any extra effort.
Please note that the file location refers to the local path on the instance, and bucketname is the name of your bucket.
Also note: This is possible if your instance and S3 bucket are in the same account.
Cheers.
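If the copy fails with an access error, a quick way to confirm the instance is actually picking up credentials from the attached role (a suggestion, not part of the original answer) is:
aws sts get-caller-identity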
We do have a dryrun feature available for testing.
To begin with, I would assign the EC2 instance a role that can read and write to S3.
SSH into the instance and perform the following:
vi tmp1.txt
aws s3 mv ./ s3://bucketname-bucketurl.com/ --recursive --dryrun
If this works, then all you have to do is create a script to upload all the files from this folder to the S3 bucket.
I have written the following command in my script to move files older than 2 minutes from the current directory to the bucket/folder:
cd dir; ls . -rt | xargs -I FILES find FILES -maxdepth 1 -name '*.txt' -mmin +2 -exec aws s3 mv '{}' s3://bucketurl.com \;
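As a side note, the ls | xargs indirection is not needed; find alone can do the same thing (a sketch, using the same placeholder bucket URL):
cd dir; find . -maxdepth 1 -name '*.txt' -mmin +2 -exec aws s3 mv '{}' s3://bucketurl.com \;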

syncing AWS S3 bucket to EC2 Servers

I am trying to use rsync to sync my S3 bucket with my EC2 servers. However I am having trouble coming up with the code. On my EC2 server I have tried the following, but it doesn't work. I know my S3 address is wrong but I'm not sure what to put in its place. iosSourceCode is the bucket name. How can I sync the files in this bucket to my EC2 server's files? After I get this to work I was going to set up a cronjob to do this every 10 minutes or whatever. Is there a better way to do this and if so how? Please provide code, thanks!
sudo rsync -ra iosSourceCode.s3-website-us-east-1.amazonaws.com /var/www/
You can use this command:
aws s3 --region <your region name> sync s3://<your bucket name> /your/directory/path
So in your case:
aws s3 --region us-east-1 sync s3://iosSourceCode /var/www/
This is a one-way sync (only downloads updates from S3 in this example) so if you want to sync both ways then you need two commands with the source and destination swapped around:
aws s3 --region us-east-1 sync s3://iosSourceCode /var/www/
aws s3 --region us-east-1 sync /var/www s3://iosSourceCode
You can add this to a crontab entry to make it occur periodically, as per the following example:
SHELL=/bin/bash
PATH=/sbin:/bin:/usr/sbin:/usr/bin:/opt/aws/bin
MAILTO=root
HOME=/
*/2 * * * * root aws s3 sync --region ap-southeast-2 /var/www/html s3://mywebsitecode/ >> /usr/local/logs/aws.log 2>&1
*/5 * * * * root aws s3 sync --region ap-southeast-2 s3://mywebsitecode/ /var/www/html >> /usr/local/logs/aws.log 2>&1
Please use s3cmd
http://s3tools.org/s3cmd
Use s3cmd sync. The syntax is as below:
s3cmd sync s3://mybucket/myfolder/files/ /var/mybucket/myfolder/files/
You can put the above command in a shell script and add the script to cron to run it at a specific time interval.
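For the every-10-minutes schedule mentioned in the question, the crontab entry could look like this (a sketch; the paths are the same placeholders as above):
*/10 * * * * s3cmd sync s3://mybucket/myfolder/files/ /var/mybucket/myfolder/files/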
You might want to have a look at the s3 sync command that is part of the AWS CLI (Command Line Interface).