AWS SQS --cli-input-json does not recognize attribute FifoQueue

I'm using aws cli 1.11.102 on Windows. The following two commands give me different results:
aws sqs create-queue --cli-input-json "{\"QueueName\":\"JustANormal_name\",\"Attributes\":{\"FifoQueue\":\"false\"}}"
This gives me an error:
An error occurred (InvalidAttributeName) when calling the CreateQueue operation: Unknown Attribute FifoQueue.
However, I'm able to create a FIFO queue using:
aws sqs create-queue --queue-name "Something.fifo" --attributes "{\"FifoQueue\":\"true\"}"
I've tried passing in other attributes in JSON format and the following line works.
aws sqs create-queue --cli-input-json "{\"QueueName\":\"my_team-std_queue-2\",\"Attributes\":{\"DelaySeconds\":\"10\"}}"
I've also verified that I'm using us-east-1 (N. Virginia) for all the commands above, so I don't think the region is the problem.
--- Edit ---
Following up on John's comment: passing FifoQueue="true" via the --attributes shorthand syntax works fine. This has been added to the bug report, and follow-ups go here:
AWS bug report
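A minimal sketch of the two forms side by side. The queue name is hypothetical, and the aws calls are shown as comments since they need credentials; the single-quoted JSON (which avoids the backslash escaping needed on Windows) can still be validated locally:

```shell
# On a POSIX shell, single quotes avoid the Windows-style backslash escaping:
input='{"QueueName":"Something.fifo","Attributes":{"FifoQueue":"true"}}'
python3 -c 'import json,sys; json.loads(sys.argv[1])' "$input" && echo "valid JSON"

# The affected call (fails with InvalidAttributeName on the buggy CLI version):
#   aws sqs create-queue --cli-input-json "$input"
# The working shorthand form from the follow-up above:
#   aws sqs create-queue --queue-name Something.fifo --attributes FifoQueue=true
```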

Related

Cannot list nor describe MSK topic's configuration with AWS CLI

Is it possible to use the AWS CLI tool to list the Kafka MSK topics and describe the configuration of them?
The AWS documentation defines a topic ARN like this: arn:aws:kafka:region:account-id:topic/cluster-name/cluster-uuid/topic-name
I tried to execute the following command (some parts of the ID are replaced with X and the topic name with Y):
aws --profile dev --region eu-central-1 kafka describe-configuration --arn 'arn:aws:kafka:eu-central-1:XXXXXXXXXXX:topic/sre-dev-central-km-msk/0c4e35a9-XXXX-4d32-XXXX-76aa15890225-8/YYYYYYY'
But I get the following error:
An error occurred (BadRequestException) when calling the DescribeConfiguration operation: One or more of the parameters are not valid.
You are using the topic ARN, but you should be using the ARN of an MSK configuration:
The Amazon Resource Name (ARN) that uniquely identifies an MSK configuration and all of its revisions.
You can use list-configurations to find the ARNs of your configurations.
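The mismatch is visible in the ARN itself: describe-configuration expects an ARN whose resource type is "configuration", not "topic". A small local sketch (account ID and names are hypothetical; the real lookup command is commented out since it needs credentials):

```shell
# The resource type is the first path segment of the 6th colon-separated field.
arn_type() { printf '%s' "$1" | cut -d: -f6 | cut -d/ -f1; }

topic_arn='arn:aws:kafka:eu-central-1:111111111111:topic/my-cluster/uuid/my-topic'
config_arn='arn:aws:kafka:eu-central-1:111111111111:configuration/my-config/uuid'

arn_type "$topic_arn"    # prints: topic         -> rejected by describe-configuration
arn_type "$config_arn"   # prints: configuration -> what the call expects

# Find the real configuration ARNs with:
#   aws kafka list-configurations --region eu-central-1
```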

Invalid ARN when performing tagging operation on aws wafv2

I am trying to view and add tags on my web ACLs using the aws wafv2 CLI commands.
Other commands seem to work properly, but I get the following error when using the ARN for tagging:
The command:
aws wafv2 list-tags-for-resource \
--resource-arn arn:aws:wafv2:us-east-1:<account_id>:global/webacl/<acl_name>/<acl_id>
Output:
An error occurred (WAFInvalidParameterException) when calling the TagResource operation:
Error reason: The ARN isn't valid. A valid ARN begins with arn: and includes other
information separated by colons or slashes., field: RESOURCE_ARN, parameter: <arn>
Any idea why this is happening? I understand that the older aws waf version uses a different format, but I am using wafv2 now, so I think I am already using the correct ARN format.
Just confirmed the fix. As the comment above indicated, I just needed to add the --region parameter, and it needs to match the region indicated in the ARN.
I'm not sure why this is necessary, though, since the region in my ~/.aws/config already defaults to us-east-1.
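One way to make that mismatch impossible is to derive the --region value from the ARN itself. A sketch, with a hypothetical account ID, ACL name, and ID (the aws call is commented out since it needs credentials):

```shell
# Field 4 of an ARN is the region the resource lives in.
arn='arn:aws:wafv2:us-east-1:123456789012:global/webacl/my-acl/abc-123'
region=$(printf '%s' "$arn" | cut -d: -f4)
echo "$region"   # prints: us-east-1

# Then the flag and the ARN can never disagree:
#   aws wafv2 list-tags-for-resource --resource-arn "$arn" --region "$region"
```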

`aws iot-data` command and AWS reserved topics ($)

I'm a newbie to AWS IoT and am playing around with existing resources to understand the main concepts.
I ran into some odd behaviour while using the aws iot-data command to publish data to one of the AWS reserved topics.
Let's say I want to update a named shadow called stubShadow of a stub thing (I'm observing the results in the Test tab of the AWS IoT console):
aws iot-data update-thing-shadow --thing-name stub --shadow-name stubShadow \
--cli-binary-format raw-in-base64-out \
--payload '{"state":{"desired":{"ColorRGB":[0,11,11]}},"clientToken":"21b21b21-bfd2-4279-8c65-e2f697ff4fab"}' /dev/stdout
and it works pretty well; I can watch the $aws/things/stub/shadow/name/stubShadow/update/accepted topic for updates.
Now I want to publish a message using topic argument. Here is an example:
aws iot-data publish --topic "$aws/things/stub/shadow/name/stubShadow/update" \
--cli-binary-format raw-in-base64-out \
--payload '{"state":{"reported":{"ColorRGB":[0,11,11]}},"clientToken":"21b21b21-bfd2-4279-8c65-e2f697ff4fab"}'
and nothing happens...
What's wrong with this command when sending a message directly to an AWS service topic? Am I missing something? It works fine for regular (manually created) topics.
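One pitfall worth ruling out first (a guess on my part, assuming the command was run in a POSIX shell exactly as written): inside double quotes the shell expands $aws as a variable, which is normally unset, so the reserved-topic prefix silently disappears before the CLI ever sees it:

```shell
unset aws   # ensure $aws is not set in the environment for this demo

# In double quotes, the shell expands $aws as a variable (here: empty):
double_quoted="$aws/things/stub/shadow/name/stubShadow/update"
# In single quotes, it stays literal:
single_quoted='$aws/things/stub/shadow/name/stubShadow/update'

echo "$double_quoted"   # prints: /things/stub/shadow/name/stubShadow/update
echo "$single_quoted"   # prints: $aws/things/stub/shadow/name/stubShadow/update
```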
As the documentation says, the payload is the base64-encoded representation of the stringified JSON message:
{"state":{"reported":{"ColorRGB":[0,11,11]}}}
console.log(btoa(JSON.stringify({"state":{"reported":{"ColorRGB":[0,11,11]}}})))
payload : eyJzdGF0ZSI6eyJyZXBvcnRlZCI6eyJDb2xvclJHQiI6WzAsMTEsMTFdfX19
aws iot-data publish --topic "$aws/things/stub/shadow/name/stubShadow/update" --payload 'eyJzdGF0ZSI6eyJyZXBvcnRlZCI6eyJDb2xvclJHQiI6WzAsMTEsMTFdfX19'
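The same base64 string can be produced with standard shell tools instead of a browser console:

```shell
payload='{"state":{"reported":{"ColorRGB":[0,11,11]}}}'
# tr guards against the line wrapping some base64 implementations add:
encoded=$(printf '%s' "$payload" | base64 | tr -d '\n')
echo "$encoded"
# prints: eyJzdGF0ZSI6eyJyZXBvcnRlZCI6eyJDb2xvclJHQiI6WzAsMTEsMTFdfX19
```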
When using the CLI, a published message won't be reflected in the shadow document, whereas the update-thing-shadow command does change the shadow.
We recently observed this using the MQTT test client: open the AWS IoT Core console, go to the test client, and subscribe to the topic you are publishing to.
You will see the published message coming through.

Is there a simple way to clone a glue job, but change the database connections?

I have a large number of clients who supply data in the same format, and I need it loaded into identical tables in different databases. I have set up a job for one of them in Glue, but now I have to do the same thing another 20 times.
Is there any way I can take an existing job and copy it, but with changes to the S3 filepath and the JDBC connection?
I haven't been able to find much online regarding scripting in AWS Glue. Would this be achievable through the AWS command line interface?
The quickest way would be to use the aws cli.
aws glue get-job --job-name <value>
where value is the specific job that you are trying to replicate. You can then alter the s3 path and JDBC connection info in the JSON that the above command returns. Also, you'll need to give it a new unique name. Once you've done that, you can pass that in to:
aws glue create-job --cli-input-json <value>
where value is the updated JSON that you are trying to create a new job from.
See AWS command line reference for more info on the glue command line
use the command
aws glue create-job --generate-cli-skeleton
to generate the skeleton JSON
Use the below command to get the existing job's definition
aws glue get-job --job-name <value>
Copy the values from the output of existing job's definition into skeleton
Remove the newline characters and pass the result as input to the command below:
aws glue create-job --cli-input-json <framed_JSON>
Here is the complete reference for Create Job AWS CLI documentation
https://docs.aws.amazon.com/cli/latest/reference/glue/create-job.html
PS: don't change the order of the elements in the JSON (as generated in the skeleton); only update the connection details and the name.
--cli-input-json (string) Performs service operation based on the JSON string provided. The JSON string follows the format provided by --generate-cli-skeleton. If other arguments are provided on the command line, the CLI values will override the JSON-provided values. It is not possible to pass arbitrary binary values using a JSON-provided value as the string will be taken literally.
--generate-cli-skeleton (string) Prints a JSON skeleton to standard output without sending an API request. If provided with no value or the value input, prints a sample input JSON that can be used as an argument for --cli-input-json. If provided with the value output, it validates the command inputs and returns a sample output JSON for that command.
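The edit between get-job and create-job can also be scripted. A sketch under stated assumptions: the job names, script location, and connection name are hypothetical, the stub dictionary stands in for real get-job output, and the aws calls are commented out since they need credentials:

```shell
# Real flow (needs credentials):
#   aws glue get-job --job-name perfect_job > perfect_job.json
python3 - <<'PYEOF'
import json

# Stub standing in for the JSON that `aws glue get-job` returns:
raw = {"Job": {"Name": "perfect_job",
               "Role": "GlueServiceRole",
               "CreatedOn": "2021-01-01T00:00:00",       # read-only field
               "LastModifiedOn": "2021-01-02T00:00:00",  # read-only field
               "Command": {"Name": "glueetl",
                           "ScriptLocation": "s3://old-bucket/script.py"},
               "Connections": {"Connections": ["old-jdbc-connection"]}}}

job = raw["Job"]                      # unwrap: create-job wants the value of "Job"
for key in ("CreatedOn", "LastModifiedOn"):
    job.pop(key, None)                # create-job rejects read-only fields
job["Name"] = "cloned_job"            # the new name must be unique
job["Command"]["ScriptLocation"] = "s3://new-bucket/script.py"
job["Connections"]["Connections"] = ["new-jdbc-connection"]

with open("cloned_job.json", "w") as f:
    json.dump(job, f, indent=2)
PYEOF
#   aws glue create-job --cli-input-json file://cloned_job.json
```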
Thanks to the great answers here, you already know that the AWS CLI comes to the rescue.
Tip: if you don't want to install or update the AWS CLI, just use the AWS CloudShell!
I've tested the commands here using version:
$ aws --version
aws-cli/1.19.14 Python/3.8.5 Linux/5.4.0-65-generic botocore/1.20.14
If you want to create a new job from scratch, you'll want a template first, which you can get with:
aws glue create-job --generate-cli-skeleton > job_template.json
Then use your favourite editor (I like vim) to fill out the details in job_template.json (or whatever you call it).
But if DuckDuckGo or other engine sent you here, there's probably an existing job that you would like to clone and tweak. We'll call it "perfect_job" in this guide.
Let's get a list of all the jobs, just to check we're in the right place.
aws glue list-jobs --region us-east-1
The output shows us two jobs:
{
"JobNames": [
"perfect_job",
"sunshine"
]
}
View our job:
aws glue get-job --job-name perfect_job --region us-east-1
The JSON output looks right, let's put it in a file so we can edit it:
aws glue get-job --job-name perfect_job --region us-east-1 > perfect_job.json
Let's cp that to a new file, say super_perfect_job.json. Now you can edit it to change the fields as desired. The first thing of course is to change the Name!
Two things to note:
Remove the outer level of the JSON, we need the value of Job not the Job identifier itself. If you look at job_template.json created above, you'll see that it must start with Name, so it's a small edit to match the format requirement.
There's no CreatedOn or LastModifiedOn in job_template.json either, so let's delete those lines too. Don't worry, if you forget to delete them, the creation will fail with a helpful message like 'Parameter validation failed: Unknown parameter in input: "LastModifiedOn"'.
Now we're ready to create the job! The following example will add Glue job "super_perfect_job" in the Cape Town region:
aws glue create-job --cli-input-json file://super_perfect_job.json --region af-south-1
But that didn't work:
An error occurred (InvalidInputException) when calling the CreateJob
operation: Please set only Allocated Capacity or Max Capacity.
I delete MaxCapacity and try again. Still not happy:
An error occurred (InvalidInputException) when calling the CreateJob
operation: Please do not set Allocated Capacity if using Worker Type
and Number of Workers.
Fine. I delete AllocatedCapacity and have another go. This time the output is:
{
    "Name": "super_perfect_job"
}
Which means, success! You can confirm by running list-jobs again. It's even more rewarding to open the AWS Console and see it pop up in the web UI.
We can't wait to run this job, so we'll use the CLI as well, and we'll pass three additional parameters: --fruit, --vegetable and --nut which our script expects. But -- would confuse the AWS CLI so let's store these in a file called args.json containing:
{
  "--fruit": "tomato",
  "--vegetable": "cucumber",
  "--nut": "almond"
}
And call our job like so:
aws glue start-job-run --job-name super_perfect_job --arguments file://args.json --region af-south-1
Or like this:
aws glue start-job-run --job-name super_perfect_job --arguments '{"--fruit": "tomato","--vegetable": "cucumber"}'
And you can view the status of job runs with:
aws glue get-job-runs --job-name super_perfect_job --region us-east-1
As you can see, the AWS Glue API accessed by the AWS CLI is pretty powerful, being not only convenient, but allowing automation in Continuous Integration (CI) servers like Jenkins, for example. Run aws glue help for more commands and quick help or see the online documentation for more details.
For creating or managing permanent infrastructure, it's preferable to use Infrastructure as Code tools, such as CloudFormation or Terraform.

An error occurred (InvalidParameter) when calling the AddPermission operation: Invalid parameter: Policy contains too many statements

We are trying to add permission to an SNS topic in account 'A'. A lambda function in account 'B' will invoke this. To do this, we used the CLI as below:
aws sns add-permission --topic-arn arn:aws:sns:us-east-1:<account_A>:djif-prod-policy-engine-config-sns --label lambda-<account_B>-us-east-2 --aws-account-id <account_B> --action-name Publish --region us-east-1
This returns the following error:
An error occurred (InvalidParameter) when calling the AddPermission operation: Invalid parameter: Policy contains too many statements!
Can someone help us figure out a way to resolve this? We created a lambda function in a different account (account C), and the command worked fine with no errors there.
We figured this out. Every aws sns add-permission call appends a statement to the SNS topic policy. We had a bug in our code that called it multiple times for the same account (we are trying to invoke this SNS topic from multiple accounts). AWS limits the number of statements in a topic policy to 100, and once we hit that limit we got this error.
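To see how close a topic's policy is to the limit, you can count its Statement entries; get-topic-attributes returns the policy as a JSON string. A sketch (the topic ARN in the commented command is hypothetical, and the stub policy stands in for real output):

```shell
# Real flow (needs credentials):
#   aws sns get-topic-attributes \
#     --topic-arn arn:aws:sns:us-east-1:111111111111:my-topic \
#     --query 'Attributes.Policy' --output text
# Stub policy with two statements, standing in for the real output:
policy='{"Version":"2012-10-17","Statement":[{"Sid":"one"},{"Sid":"two"}]}'
printf '%s' "$policy" | python3 -c 'import json,sys; print(len(json.load(sys.stdin)["Statement"]))'
# prints: 2
```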