I want to run queries by passing them as strings to some supported AWS CLI command. The Redshift-specific CLI commands don't seem to include anything that can execute SQL remotely.
Link: https://docs.aws.amazon.com/cli/latest/reference/redshift/index.html
Need help on this.
You need to use psql; there is no API interface to Redshift.
Redshift is loosely based on PostgreSQL, however, so you can connect to the cluster using the psql command-line tool.
https://docs.aws.amazon.com/redshift/latest/mgmt/connecting-from-psql.html
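From that guide, a connection looks roughly like this (the endpoint, user, and database below are placeholders; 5439 is Redshift's default port):
psql -h mycluster.abc123xyz789.us-west-2.redshift.amazonaws.com -p 5439 -U myuser -d dev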
You can use the Redshift Data API to execute queries on Redshift using the AWS CLI.
aws redshift-data execute-statement \
    --region us-west-2 \
    --secret-arn arn:aws:secretsmanager:us-west-2:123456789012:secret:myuser-secret-hKgPWn \
    --cluster-identifier mycluster-test \
    --sql "select * from stl_query limit 1" \
    --database dev
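Note that execute-statement runs asynchronously and returns a statement Id rather than the rows themselves. You can then check progress and fetch the output, roughly like this (the Id is a placeholder for the value returned by the previous call):
aws redshift-data describe-statement --id <statement-id>
aws redshift-data get-statement-result --id <statement-id>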
https://aws.amazon.com/about-aws/whats-new/2020/09/announcing-data-api-for-amazon-redshift/
https://docs.aws.amazon.com/redshift/latest/mgmt/data-api.html
https://docs.aws.amazon.com/cli/latest/reference/redshift-data/index.html#cli-aws-redshift-data
I have about 15 Lightsail instances in my AWS account, and I want to know when these instances were created.
I'm looking for the date and time each Lightsail instance was created, but I am not able to find this information in the AWS Lightsail console.
You can use the AWS Command Line Interface (AWS CLI) to retrieve the creation date of your Lightsail instances. The following AWS CLI command will list all of your Lightsail instances and the creation date for each instance:
aws lightsail get-instances --query "instances[*].{Name:name, CreationDate:createdAt}"
This command uses the get-instances command to retrieve information about all of your Lightsail instances, and the --query option to extract the name and creation date of each instance.
Alternatively, you can also retrieve the creation date of a specific Lightsail instance by using the get-instance command and specifying the instance name:
aws lightsail get-instance --instance-name <instance_name> --query "instance.createdAt"
Replace <instance_name> with the name of the Lightsail instance you want to retrieve information for.
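If you want the whole list sorted by creation date and printed as a table, something like the following should also work (sort_by is standard JMESPath, available in current AWS CLI versions):
aws lightsail get-instances \
    --query "sort_by(instances, &createdAt)[*].{Name:name, CreationDate:createdAt}" \
    --output table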
Note: The AWS CLI must be installed and configured on your local machine in order to use these commands.
If you haven't already done so, just follow these steps:
1- Install the AWS CLI:
You can install the AWS CLI on your local machine by following the installation instructions for your operating system. You can find the installation instructions at the following URL: https://aws.amazon.com/cli/
2- Configure the AWS CLI:
After installing the AWS CLI, you need to configure it with your AWS credentials. You can do this by running the following command:
aws configure
This will prompt you for your AWS access key ID, secret access key, default region name, and default output format. You can find your AWS access keys in the AWS Management Console.
3- Verify the configuration:
To verify that your AWS CLI is configured correctly, you can run the following command:
aws lightsail get-instances
This command should list all of your Lightsail instances in your AWS account.
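If that call fails, you can also check that your credentials are being picked up at all, independently of Lightsail:
aws sts get-caller-identity
This returns the account ID and ARN that the CLI is authenticating as.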
With the AWS CLI installed and configured, you can now use the AWS CLI commands to retrieve information about your Lightsail instances, including the creation date.
I hope my answer helps you.
I have been trying to stop multiple RDS instances with a single command line, but it does not seem to work.
Currently I can only make it work with one instance at a time with a command like this:
aws rds stop-db-instance --db-instance-identifier test-instance1 --region ap-southeast-1 --profile dev
However, I would like to stop multiple RDS instances, and this does not seem to work:
aws rds stop-db-instance --db-instance-identifier test-instance1 test-instance2 testinstance3 --region ap-southeast-1 --profile dev
Any idea or suggestion on how I can make this work?
If it is not possible, I will probably create a cron job using Lambda instead.
Sadly you can't do this. But you can write a simple bash for loop:
ids=(test-instance1 test-instance2 test-instance3)
for id in "${ids[@]}"; do
    echo "Stopping: ${id}"
    aws rds stop-db-instance --db-instance-identifier "${id}" --region ap-southeast-1 --profile dev
done
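If you'd rather not hard-code the list, a variation that discovers every currently available instance and stops it (a sketch; adjust the region and profile to taste):
for id in $(aws rds describe-db-instances \
        --region ap-southeast-1 --profile dev \
        --query "DBInstances[?DBInstanceStatus=='available'].DBInstanceIdentifier" \
        --output text); do
    echo "Stopping: ${id}"
    aws rds stop-db-instance --db-instance-identifier "${id}" --region ap-southeast-1 --profile dev
done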
If writing a shell script that calls aws rds stop-db-instance multiple times, once per RDS instance, is problematic for you somehow, then consider doing this via a scheduled Lambda (think of it like crontab).
See Schedule Amazon RDS stop and start using AWS Lambda, which:
presents a solution using AWS Lambda and Amazon EventBridge that allows you to schedule a Lambda function to stop and start the idle databases with specific tags to save on compute costs.
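If you go that route, the schedule itself can also be created from the CLI; a minimal sketch (the rule name and function ARN are placeholders, and the cron expression uses EventBridge's six-field format):
aws events put-rule --name stop-rds-nightly --schedule-expression "cron(0 13 * * ? *)"
aws events put-targets --rule stop-rds-nightly --targets "Id"="1","Arn"="arn:aws:lambda:ap-southeast-1:123456789012:function:stop-rds"
You would still need to grant EventBridge permission to invoke the function (aws lambda add-permission) and implement the stop logic inside the Lambda itself.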
I am using the following command to create an AWS Aurora Serverless instance:
aws rds create-db-cluster --db-cluster-identifier test-cluster --database-name testdb --master-username test --master-user-password testtest --engine aurora --engine-mode serverless --region us-east-1
but I am getting the following error.
Unknown options: --engine-mode, serverless
The above command works great on my AWS account, but it's not working on my client's account (I just have programmatic access to that account). I have double-checked the permissions, and I have similar permissions to those on my own account.
Summary: the AWS command to create a serverless Aurora cluster works on one account but not on another account with similar permissions.
The error message states that it does not know about the --engine-mode argument. This is a clear indication that your AWS CLI version is outdated. Serverless support was added in a late-2018 release, so you need to update your client's AWS CLI to recognize these inputs.
I have figured it out. I was using awscli version 1.14 on my server and 1.16 on my laptop. I updated the awscli and now it's working fine.
sudo pip install --upgrade awscli
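To confirm which version each machine is running before and after the upgrade:
aws --version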
I am trying to set up some easy code to run when spinning up an EMR cluster for some ad hoc work I have to do from time to time.
Right now I run the 'aws emr create-cluster' command, find the master DNS in the console once the cluster is created, and then use ssh to connect.
I'd like to skip having to open the console at all, and use the cluster ID to get the DNS value to create my SSH connection, but I am not seeing a clear command to do this with. I'm new to CLI so I imagine this is a simple task I am merely failing at figuring out myself.
In my mind the solution should be something along the lines of
aws emr create-cluster [config for cluster here] > file.txt
set DNS = aws emr describe-cluster --cluster-id file.txt -MasterPublicDnsName
ssh -i Desktop/AWS/EMRKey.pem -o ServerAliveInterval=15 hadoop@$DNS
I'll probably have to prepend 'hadoop@' to the DNS variable before passing it into a command, but I'm more curious at the moment whether the above makes any functional sense, and if so, how I can get the describe-cluster command to output the MasterPublicDnsName, as that is obviously just something I made up and not an actual option that I have found.
The AWS CLI has a --query option that lets you extract fields from the output of a command. You'll also want to use a waiter to make sure the cluster is up before you try to connect to it.
You could simply run
cluster_id="j-2RNBSZZBLXTZ0"
aws emr wait cluster-running --cluster-id $cluster_id
hostname=$(aws emr describe-cluster --output text --cluster-id $cluster_id --query Cluster.MasterPublicDnsName)
ssh hadoop@$hostname
That should work!
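If you also want to skip the temporary file from your sketch, you can capture the cluster ID straight from create-cluster and chain the same steps; roughly (cluster config elided as in your example, key path taken from it):
cluster_id=$(aws emr create-cluster [config for cluster here] --query ClusterId --output text)
aws emr wait cluster-running --cluster-id "$cluster_id"
hostname=$(aws emr describe-cluster --cluster-id "$cluster_id" --query Cluster.MasterPublicDnsName --output text)
ssh -i Desktop/AWS/EMRKey.pem -o ServerAliveInterval=15 hadoop@"$hostname"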
I have an AWS EMR cluster running spark, and I'd like to submit a PySpark job to it from my laptop (--master yarn) to run in cluster mode.
I know that I need to set up some config on the laptop, but I'd like to know what the bare minimum is. Do I just need some of the config files from the master node of the cluster? If so, which? Or do I need to install hadoop or yarn on my local machine?
I've done a fair bit of searching for an answer, but I haven't been able to tell whether what I was reading referred to launching a job from the master node of the cluster or from some arbitrary laptop...
If you want to run the spark-submit job solely on your AWS EMR cluster, you do not need to install anything locally. You only need the EC2 key pair you specified in the Security Options when you created the cluster.
I personally scp over any relevant scripts and/or jars, ssh into the master node of the cluster, and then run spark-submit.
You can specify most of the relevant spark job configurations via spark-submit itself. AWS documents in some more detail how to configure spark-submit jobs.
For example:
>> scp -i ~/PATH/TO/${SSH_KEY} /PATH/TO/PYSPARK_SCRIPT.py hadoop@${PUBLIC_MASTER_DNS}:
>> ssh -i ~/PATH/TO/${SSH_KEY} hadoop@${PUBLIC_MASTER_DNS}
>> spark-submit --conf spark.OPTION.OPTION=VALUE PYSPARK_SCRIPT.py
However, if you already pass a particular configuration when creating the cluster itself, you do not need to re-specify those same configuration options via spark-submit.
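For example, a classification like spark-defaults can be baked in when the cluster is created via the --configurations option; a sketch (the release label and property values here are illustrative, not a recommendation):
aws emr create-cluster \
    --release-label emr-5.30.0 \
    --applications Name=Spark \
    --configurations '[{"Classification":"spark-defaults","Properties":{"spark.executor.memory":"20g","spark.executor.cores":"5"}}]' \
    [rest of your cluster config here]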
You can setup the AWS CLI on your local machine, put your deployment on S3, and then add an EMR step to run on the EMR cluster. Something like this:
aws emr add-steps --cluster-id j-xxxxx --steps Type=spark,Name=SparkWordCountApp,Args=[--deploy-mode,cluster,--master,yarn,--conf,spark.yarn.submit.waitAppCompletion=false,--num-executors,5,--executor-cores,5,--executor-memory,20g,s3://codelocation/wordcount.py,s3://inputbucket/input.txt,s3://outputbucket/],ActionOnFailure=CONTINUE
Source: https://aws.amazon.com/de/blogs/big-data/submitting-user-applications-with-spark-submit/
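If you want to check on the step afterwards without opening the console, the step ID returned by add-steps can be fed to describe-step (both IDs below are placeholders):
aws emr describe-step --cluster-id j-xxxxx --step-id s-xxxxx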