I am unable to download the complete logs using the AWS CLI for a Postgres RDS instance -
aws rds download-db-log-file-portion \
--db-instance-identifier $INSTANCE_ID \
--starting-token 0 --output text \
--max-items 99999999 \
--log-file-name error/postgresql.log.$CDATE-$CHOUR > DB_$INSTANCE_ID-$CDATE-$CHOUR.log
The log file I see in the console is ~10 GB, but with the CLI I always get a file of only ~100 MB.
Ref - https://docs.aws.amazon.com/cli/latest/reference/rds/download-db-log-file-portion.html
The AWS docs say:
In order to download the entire file, you need the --starting-token 0 parameter:
aws rds download-db-log-file-portion --db-instance-identifier test-instance \
--log-file-name log.txt --starting-token 0 --output text > full.txt
Can someone please suggest a fix?
You can download the complete file via the AWS Console.
But if you want to download it via a script, I recommend this method:
https://linuxtut.com/en/fdabc8bc82d7183a05f3/
Update the variable values in the script:
profile = "default"
instance_id = "database-1"
region = "ap-northeast-1"
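If you prefer to stay in plain shell instead of that Python script, here is a rough, untested sketch of the same idea: it pages through the log using the Marker and AdditionalDataPending fields that download-db-log-file-portion returns. It assumes jq is installed and reuses the INSTANCE_ID/CDATE/CHOUR variables from the question.
LOG_NAME="error/postgresql.log.$CDATE-$CHOUR"
OUT="DB_${INSTANCE_ID}-${CDATE}-${CHOUR}.log"
MARKER="0"          # "0" = start at the beginning of the file
PENDING="true"
: > "$OUT"
while [ "$PENDING" = "true" ]; do
  # --no-paginate stops the CLI from trying to aggregate pages itself
  RESP=$(aws rds download-db-log-file-portion \
           --db-instance-identifier "$INSTANCE_ID" \
           --log-file-name "$LOG_NAME" \
           --marker "$MARKER" \
           --no-paginate --output json)
  echo "$RESP" | jq -j '.LogFileData // ""' >> "$OUT"
  PENDING=$(echo "$RESP" | jq -r '.AdditionalDataPending')
  MARKER=$(echo "$RESP" | jq -r '.Marker')
done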
I've found it much simpler to just curl the link the console gives you. Just output it to a file and you're done. Make sure you have ample space on disk.
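For example (the URL is whatever the console's Download action gives you; it is signed and expires after a short time):
curl -o DB_full.log "<signed download URL copied from the RDS console>"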
Is there a way I can copy a file from my S3 bucket to a Windows EC2 instance?
I have tried the following way using send-command. It returns success, but the file is not being copied. Need help.
sh """
aws ssm send-command --instance-ids ${Instance_Id} --document-name "AWS-RunPowerShellScript" --parameters '{"commands":["Read-S3Object -BucketName s3://{bucket-name} file.pfx -File file.pfx"]}' --timeout-seconds 600 --max-concurrency "50" --max-errors "0" --region eu-west-1
"""
I believe the command you pasted is wrong, or it got mangled when you copied it.
Since you are running the AWS CLI and sending a PowerShell command to be run within the instance, the two documents below are worth referring to:
Send-command CLI: https://docs.aws.amazon.com/cli/latest/reference/ssm/send-command.html
Read-S3Object CmdLet: https://docs.aws.amazon.com/powershell/latest/reference/items/Read-S3Object.html
SSM returning success only means that it was able to invoke the underlying plugin (in this case runPowerShellScript), regardless of whether the script itself succeeded. To investigate why it did not copy the file, start by checking the output of the SSM command.
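For example, something like this (the command ID and instance ID below are placeholders; the command ID comes from the send-command response) prints the status and the stdout/stderr of the PowerShell run on that instance:
aws ssm get-command-invocation \
    --command-id "<CommandId from the send-command response>" \
    --instance-id "<instance-id>" \
    --query "[Status,StandardOutputContent,StandardErrorContent]" \
    --output text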
Having said that, below is working syntax for copying a file from an S3 object using runPowerShellScript:
aws ssm send-command --instance-ids $instance --document-name "AWS-RunPowerShellScript" --parameters commands=["Read-S3Object -BucketName $bucket -key get-param.reg -File c:\programdata\get-param.reg"]
SSM also provides a way to download an S3 object with its own plugin, aws:downloadContent:
https://docs.aws.amazon.com/systems-manager/latest/userguide/ssm-plugins.html#aws-downloadContent
This requires you to create a custom document (you should find an example in the doc above) and then run that document to get the S3 object onto the Windows/Linux instance; a sketch follows below.
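A rough sketch of that approach, with a made-up bucket, key, and document name (the sourceInfo format follows the examples on that page, so double-check it against the doc):
# Write the custom document (schema 2.2, single aws:downloadContent step)
cat > download-from-s3.json <<'EOF'
{
  "schemaVersion": "2.2",
  "description": "Download a file from S3 using aws:downloadContent",
  "mainSteps": [
    {
      "action": "aws:downloadContent",
      "name": "downloadFile",
      "inputs": {
        "sourceType": "S3",
        "sourceInfo": "{\"path\": \"https://my-bucket.s3.eu-west-1.amazonaws.com/file.pfx\"}",
        "destinationPath": "C:\\ProgramData"
      }
    }
  ]
}
EOF
# Register it as a Command document and run it on the instance
aws ssm create-document --name "DownloadPfxFromS3" --document-type "Command" --content file://download-from-s3.json
aws ssm send-command --document-name "DownloadPfxFromS3" --instance-ids "<instance-id>" --region eu-west-1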
I hope this helps.
Here is how I would accomplish what you are attempting:
Instead of the AWS-RunPowerShellScript SSM document, use the SSM document AWS-RunRemoteScript.
This document lets you run a script stored in S3 on the EC2 instance, and inside that script you can have it download the files you're looking for from the S3 bucket using the aws s3api CLI.
It would look something like this:
aws ssm send-command --document-name "AWS-RunRemoteScript" --document-version "1" --instance-ids $instance --parameters '{"sourceType":["S3"],"sourceInfo":["{\"path\":\"[url to script that is stored in s3]\"}"],"commandLine":[".\\[name of script]"],"workingDirectory":[""],"executionTimeout":["3600"]}' --timeout-seconds 600 --max-concurrency "50" --max-errors "0"
The PowerShell script that you upload to S3 will look something like this:
aws s3api get-object --bucket [bucket name here] --key [s3 path (not url)] [path to where you want it downloaded]
To make this work, you need to make sure that the EC2 instance has permission to read from your S3 bucket. You can do this by attaching an S3 access policy (read-only is enough for get-object) to the IAM role attached to the EC2 instance.
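For example (the role name is a placeholder):
aws iam attach-role-policy \
    --role-name my-ec2-instance-role \
    --policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess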
I am trying to get the RDS endpoint with the CLI so I can use it in user data, but I am unable to figure it out.
I need to get the RDS endpoint to inject into a PHP file, but when I try the following I get:
Unable to locate credentials. You can configure credentials by running "aws configure".
I am building the EC2 instance and VPC using the CLI and need to be able to get the RDS endpoint as part of the user data.
I tried the following on the EC2 instance itself and I get the above error:
aws rds --region ca-central-1 describe-db-instances --query "DBInstances[*].Endpoint.Address"
Even if I am able to resolve that, I still need to get the endpoint and pass it as part of the user data. Is that even possible?
The "Unable to locate credentials" error means that the AWS Command-Line Interface (CLI) does not have any credentials to call the AWS APIs.
You should assign a role to the EC2 instance with sufficient permission to call describe-db-instances on RDS. See: IAM Roles for Amazon EC2
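For example, something like this attaches an existing instance profile whose role allows rds:DescribeDBInstances (the instance ID and profile name here are placeholders):
aws ec2 associate-iam-instance-profile \
    --instance-id i-0123456789abcdef0 \
    --iam-instance-profile Name=my-rds-read-profile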
Then, your User Data can include something like:
#!/bin/bash
# --output text returns the endpoint as a plain string instead of a JSON list
RDS=$(aws rds --region ca-central-1 describe-db-instances --query "DBInstances[*].Endpoint.Address" --output text)
echo "$RDS" > file
Or pass it to your script as a parameter:
php /path/to/your-script.php "$RDS"
I have it working with this -
mac=$(curl -s http://169.254.169.254/latest/meta-data/mac)
VPC_ID=$(curl -s http://169.254.169.254/latest/meta-data/network/interfaces/macs/$mac/vpc-id)
aws rds describe-db-instances --region us-east-2 | \
  jq -r --arg VPC_ID "$VPC_ID" '.DBInstances[] | select(.DBSubnetGroup.VpcId == $VPC_ID) | .Endpoint.Address'
I'm trying to create a chatbot using the AWS CLI, going through the steps in the documentation at https://docs.aws.amazon.com/lex/latest/dg/gs-create-flower-types.html
I couldn't understand what endpoint the documentation means, as shown in the syntax below.
aws lex-models put-slot-type \
--region region \
--endpoint endpoint \
--name FlowerTypes \
--cli-input-json file://FlowerTypes.json
What is the endpoint in the above syntax?
You can find the list of endpoints for Lex at this link
For your current case, https://models.lex.us-east-1.amazonaws.com/ will work as the endpoint, given that your region is us-east-1.
The code below will work if you are using a Windows machine:
aws lex-models put-slot-type ^
--region us-east-1 ^
--endpoint https://models.lex.us-east-1.amazonaws.com/ ^
--name FlowerTypes ^
--cli-input-json file://FlowerTypes.json
Keep the input json file in the same folder where you have opened the CLI.
I need to upload updated files to multiple EC2 instances that sit behind a single load balancer. My problem is that I missed some EC2 instances and it broke my webpage.
Is there any tool available to upload multiple files to multiple EC2 Windows servers in a single click?
I update my files weekly or sometimes daily. I looked at Elastic Beanstalk, AWS CodeDeploy, and Amazon EFS, but they are hard to use. Anyone please help.
I suggest using AWS S3 and the AWS CLI. Install the AWS CLI on all the EC2 instances and create a bucket in S3.
Then start a cron job on each EC2 instance with the syntax below:
aws s3 sync s3://bucket-name/folder-on-bucket /path/to/local/folder
So when you upload new images to the S3 bucket, they will automatically sync to all the EC2 instances behind your load balancer, and S3 acts as the central directory where you upload and delete images.
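For example, a crontab entry along these lines (the schedule and paths are just an illustration) keeps each instance in sync every five minutes:
*/5 * * * * aws s3 sync s3://bucket-name/folder-on-bucket /path/to/local/folder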
You could leverage the AWS CLI, you could run something like
aws elb describe-load-balancers --load-balancer-names <name_of_your_lb> \
    --query 'LoadBalancerDescriptions[].Instances[].InstanceId' --output text | tr '\t' '\n' |\
xargs -I {} aws ec2 describe-instances --instance-ids {} \
    --query 'Reservations[].Instances[].PublicIpAddress' --output text |\
xargs -I {} scp <name_of_your_file> <your_username>@{}:/some/remote/directory
Basically it goes like this:
find all the EC2 instances attached to your load balancer
for each EC2 instance, find its PublicIpAddress (presumably they have one, since you can connect to them through scp)
run the scp command to copy the file somewhere on the EC2 server
you can also copy a whole folder (scp -r) if you need to push many files; it might be easier
Amazon Elastic File System (EFS) would probably now be the easiest option. You would create the file system and mount it on all the EC2 instances attached to the load balancer; when you transfer files to the EFS, they are immediately available to every EC2 instance where it is mounted.
(The setup to create the EFS file system and mount it on your EC2 instances only has to be done once.)
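On a Linux instance, mounting it would look roughly like this (the file system ID and region are placeholders, and the NFS client or amazon-efs-utils must be installed):
sudo mkdir -p /mnt/efs
sudo mount -t nfs4 -o nfsvers=4.1 fs-12345678.efs.us-east-1.amazonaws.com:/ /mnt/efs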
Create a script containing some robocopy commands and run it when you want to update the files on your servers. Something like this:
robocopy Source Destination1 files
robocopy Source Destination2 files
You will also need to share the destination folders on the servers with the user running the script on your machine.
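For instance (server names and paths are made up), pushing a local build to two shared web roots:
robocopy C:\deploy\site \\WEBSRV01\wwwroot *.* /E /R:2 /W:5
robocopy C:\deploy\site \\WEBSRV02\wwwroot *.* /E /R:2 /W:5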
I had an Application Load Balancer (ALB), so I had to build on @FredricHenri's answer.
EC2_PUBLIC_IPS=$(aws elbv2 --profile mfa describe-load-balancers --names c360-infra-everest-dev-lb --query 'LoadBalancers[].LoadBalancerArn' --output text |
  xargs -n 1 -I {} aws elbv2 --profile mfa describe-target-groups --load-balancer-arn {} --query 'TargetGroups[].TargetGroupArn' --output text |
  xargs -n 1 -I {} aws elbv2 --profile mfa describe-target-health --target-group-arn {} --query 'TargetHealthDescriptions[*].Target.Id' --output text |
  xargs -n 1 -I {} aws ec2 --profile mfa describe-instances --instance-ids {} --query 'Reservations[].Instances[].PublicIpAddress' --output text)
echo $EC2_PUBLIC_IPS
echo ${EC2_PUBLIC_IPS} | xargs -n 1 -I {} scp -i ${EC2_SSH_KEY_FILE} ../swateek.txt ubuntu@{}:/home/ubuntu/
Points to Note
I have used an AWS profile called "mfa"; this is optional.
The environment variable EC2_SSH_KEY_FILE is the path to the .pem file used to access the EC2 instances.
I can call aws rds describe-db-snapshots --db-instance-identifier {my_db_instance} and sort all automated snapshots to find the most recently created one, but I was hoping someone out there has a better idea.
For me, this one works:
aws rds describe-db-snapshots \
--query="max_by(DBSnapshots, &SnapshotCreateTime)"
The query parameter returns only the most recent one.
If only the Arn is needed, this one might help:
aws rds describe-db-snapshots \
--query="max_by(DBSnapshots, &SnapshotCreateTime).DBSnapshotArn" \
--output text
And all that for a specific database instance:
aws rds describe-db-snapshots \
--db-instance-identifier={instance identifier} \
--query="max_by(DBSnapshots, &SnapshotCreateTime).DBSnapshotArn" \
--output text
I know this is old, but I needed the same information and was able to construct the following, which gives me just the snapshot name. It doesn't totally answer your question about definitively finding the latest snapshot, but this example might give you some direction.
aws rds describe-db-snapshots --db-instance-identifier prd --snapshot-type automated --query "DBSnapshots[?SnapshotCreateTime>='2017-06-05'].DBSnapshotIdentifier"
To break down the options:
--db-instance-identifier (put in the instance name you are looking for)
--snapshot-type (I put in automated to find the automated backups)
--query "DBSnapshots[?SnapshotCreateTime>='2017-06-05'].DBSnapshotIdentifier"
(This is what I used to refine my search, since we do daily backups: I look for a snapshot create time greater than or equal to today's date, and selecting .DBSnapshotIdentifier gives me back just the name.)
Hopefully this will help somebody else out.
My way:
> aws rds describe-db-snapshots --db-instance-identifier ${yourDbIdentifier} --query="reverse(sort_by(DBSnapshots, &SnapshotCreateTime))[0]|DBSnapshotIdentifier"
> "rds:dbName-2018-06-20-00-07"
If someone is looking for cluster command:
aws rds describe-db-cluster-snapshots --db-cluster-identifier prod --snapshot-type automated --query "DBClusterSnapshots[?SnapshotCreateTime>='2017-06-05'].DBClusterSnapshotIdentifier"
As of 31 October 2014, it looks like you can use the --t flag to list only automated backups.
http://docs.aws.amazon.com/AmazonRDS/latest/CommandLineReference/CLIReference-cmd-DescribeDBSnapshots.html
From there, you should be able to parse the output to determine your latest snapshots.
rds-describe-db-snapshots --t automated
DBSNAPSHOT rds:<NAME>-2016-08-09-17-12
There is no simpler way around this.
I am getting this error while restoring the DB from a snapshot using the ID that I get from the commands in the methods above:
An error occurred (InvalidParameterValue) when calling the RestoreDBInstanceFromDBSnapshot operation: Invalid snapshot identifier: "rds:dev-mysql-rds1-2018-10-06-01-09"
So I modified the query above; here is the query that worked for me to get the latest snapshot identifier that restore-db-instance-from-db-snapshot accepts:
aws rds describe-db-snapshots --query "DBSnapshots[?DBInstanceIdentifier=='MASTER_INSTANCE_IDENTIFIER']" | jq -r 'max_by(.SnapshotCreateTime).DBSnapshotIdentifier'
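The identifier can then go straight into the restore call, for example (the new instance name is a placeholder):
SNAPSHOT_ID=$(aws rds describe-db-snapshots \
  --query "DBSnapshots[?DBInstanceIdentifier=='MASTER_INSTANCE_IDENTIFIER']" \
  | jq -r 'max_by(.SnapshotCreateTime).DBSnapshotIdentifier')
aws rds restore-db-instance-from-db-snapshot \
  --db-instance-identifier my-restored-instance \
  --db-snapshot-identifier "$SNAPSHOT_ID"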
aws rds describe-db-cluster-snapshots --snapshot-type=automated --query="max_by(DBClusterSnapshots,&SnapshotCreateTime)"
This works as of 2022-08.
If it is an RDS cluster, then you can use the command below:
aws rds describe-db-cluster-snapshots --db-cluster-identifier <DBClusterIdentifier> --region <region> --query="max_by(DBClusterSnapshots, &SnapshotCreateTime)"
You can use the command below to fetch the ARN of that snapshot:
aws rds describe-db-cluster-snapshots --db-cluster-identifier <DBClusterIdentifier> --region <region> --query="max_by(DBClusterSnapshots, &SnapshotCreateTime).DBClusterSnapshotArn"