`aws iot-data` command and AWS reserved topics ($) - amazon-web-services

I'm a newbie to AWS IoT and am playing around with existing resources to understand the main concepts.
I ran into odd behaviour while using the aws iot-data command to publish data to one of the AWS reserved topics.
Let's say I want to update a named shadow called stubShadow of some stub thing (I'm using the Test tab in the AWS IoT Dashboard):
aws iot-data update-thing-shadow --thing-name stub --shadow-name stubShadow \
--cli-binary-format raw-in-base64-out \
--payload '{"state":{"desired":{"ColorRGB":[0,11,11]}},"clientToken":"21b21b21-bfd2-4279-8c65-e2f697ff4fab"}' /dev/stdout
and it works pretty well; I can observe updates on the $aws/things/stub/shadow/name/stubShadow/update/accepted topic.
Now I want to publish a message using the topic argument. Here is an example:
aws iot-data publish --topic "$aws/things/stub/shadow/name/stubShadow/update" \
--cli-binary-format raw-in-base64-out \
--payload '{"state":{"reported":{"ColorRGB":[0,11,11]}},"clientToken":"21b21b21-bfd2-4279-8c65-e2f697ff4fab"}'
and nothing happens...
I wonder what's wrong with this command when sending a direct message to an AWS service topic. Am I missing something? For regular (manually created) topics it works well.

As the documentation says, the payload is the base64-encoded representation of the stringified JSON message
{"state":{"reported":{"ColorRGB":[0,11,11]}}}
console.log(btoa(JSON.stringify({"state":{"reported":{"ColorRGB":[0,11,11]}}})))
payload : eyJzdGF0ZSI6eyJyZXBvcnRlZCI6eyJDb2xvclJHQiI6WzAsMTEsMTFdfX19
aws iot-data publish --topic "$aws/things/stub/shadow/name/stubShadow/update" --payload 'eyJzdGF0ZSI6eyJyZXBvcnRlZCI6eyJDb2xvclJHQiI6WzAsMTEsMTFdfX19'

When using the CLI, a published message won't be reflected in the shadow document, while the update-thing-shadow command does change the shadow.
We recently observed this while using the MQTT test client of AWS. Just open the AWS IoT Core console, go to the test client, and subscribe to the topic you are publishing to.
You will observe the published message arriving.
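One more thing worth checking (an assumption based on the commands as posted, not something confirmed above): in double quotes the shell expands $aws as a (normally empty) shell variable, so the topic the CLI actually publishes to loses its $aws prefix. Single quotes keep it literal:
aws iot-data publish --topic '$aws/things/stub/shadow/name/stubShadow/update' \
--cli-binary-format raw-in-base64-out \
--payload '{"state":{"reported":{"ColorRGB":[0,11,11]}},"clientToken":"21b21b21-bfd2-4279-8c65-e2f697ff4fab"}'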

Related

submitting a curl command from an ec2 instance does not send credentials of the attached security profile

I have an ec2 instance with an instance profile attached to it. This instance profile has permissions to publish messages to an SNS topic. When I remote into the ec2 instance and issue a command like
aws sns publish --topic-arn topic_arn --message hello
This works.
Now I'm trying to do the same with a simple curl command; this is what I use after remoting into the ec2 instance:
curl "https://sns.us-west-2.amazonaws.com/?Message=hello&Action=Publish&TargetArn=topic_arn"
I get the following error
<Code>MissingAuthenticationToken</Code>
<Message>Request is missing Authentication Token</Message>
I was hoping that curl would attach the instance profile credentials when sending the request (as the aws cli does), but it does not seem to do so. Does anyone know how I can overcome this?
When you do:
curl "https://sns.us-west-2.amazonaws.com/?Message=hello&Action=Publish&TargetArn=topic_arn"
you are directly making a request to the SNS endpoint, which means that you have to sign your request with AWS credentials from your instance profile. If you don't want to use the AWS CLI or any AWS SDK for accessing SNS, you have to implement the entire signature procedure yourself, as described in the docs.
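If you do want to try it with plain curl, here is a minimal sketch, assuming curl 7.75+ (which added built-in --aws-sigv4 signing) and IMDSv1 for fetching the instance profile credentials:
# Fetch the temporary credentials of the attached instance profile
# from the instance metadata service.
ROLE=$(curl -s http://169.254.169.254/latest/meta-data/iam/security-credentials/)
CREDS=$(curl -s "http://169.254.169.254/latest/meta-data/iam/security-credentials/$ROLE")
AWS_KEY=$(echo "$CREDS" | python3 -c 'import json,sys; print(json.load(sys.stdin)["AccessKeyId"])')
AWS_SECRET=$(echo "$CREDS" | python3 -c 'import json,sys; print(json.load(sys.stdin)["SecretAccessKey"])')
AWS_TOKEN=$(echo "$CREDS" | python3 -c 'import json,sys; print(json.load(sys.stdin)["Token"])')
# Let curl compute the SigV4 signature; the session token goes in a header.
curl "https://sns.us-west-2.amazonaws.com/?Message=hello&Action=Publish&TargetArn=topic_arn" \
--user "$AWS_KEY:$AWS_SECRET" \
--aws-sigv4 "aws:amz:us-west-2:sns" \
-H "X-Amz-Security-Token: $AWS_TOKEN"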
That's why, when you use the AWS CLI
aws sns publish --topic-arn topic_arn --message hello
everything works, because the AWS CLI makes a signed request to the SNS endpoint on your behalf.

add trigger to lambda function using cli

I am trying to add a trigger rule to a lambda version using the CLI.
I try the following command:
aws events put-targets --rule rule-name --targets "Id"="1","Arn"="arn..."
This command runs successfully, and I can see my lambda function in the EventBridge console under targets. But when I go to the lambda function's version, I don't see any trigger event added.
I am not sure if this is an error/bug or expected behavior. Is there a way to add a trigger event to a published version of a lambda function, such that it shows in the trigger console (essentially to show that the trigger event was added successfully), using the aws cli?
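One hedged suggestion (an assumption, with placeholder ARNs and account ID, not something verified in this thread): the Lambda console derives its trigger list from the function's resource-based policy, so pairing put-targets with add-permission may make the trigger show up:
aws lambda add-permission \
--function-name my-function:1 \
--statement-id eventbridge-invoke \
--action lambda:InvokeFunction \
--principal events.amazonaws.com \
--source-arn arn:aws:events:us-east-1:123456789012:rule/rule-name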
Use the CDK; it will work.
Create a lambda function and a rule using the CDK. Then you can add that rule to the lambda.
This works with the CDK. But as you said, it doesn't work with the CLI: the trigger doesn't get added in lambda.
Sample code:
Note: This is not the complete CDK code; it is just the part that creates the lambda and the rule and wires them together. This example is in Python.
# Imports assumed for CDK v1 (the original snippet omitted them):
import os
from aws_cdk import core
from aws_cdk import aws_lambda as lambda_
from aws_cdk import aws_events as _events
from aws_cdk import aws_events_targets as _targets

fn = lambda_.Function(
    self, "Name",
    runtime=lambda_.Runtime.PYTHON_3_7,
    handler="index.lambda_handler",
    role=custom_role,  # IAM role defined elsewhere in the stack
    code=lambda_.Code.from_asset(
        os.path.join(
            up_dir(__file__, 2),  # project helper defined elsewhere
            "resources/lambda/pathtoyourcode",
        )
    ),
)
# Run every minute
run_every_minute = _events.Rule(
    self,
    "runEveryMinute",
    schedule=_events.Schedule.rate(core.Duration.minutes(1)),
)
# Add the Lambda as the target of the CloudWatch Events rule
run_every_minute.add_target(_targets.LambdaFunction(fn))
Via awscli > $ aws s3api put-bucket-notification-configuration
CONSOLE
I have had the same problem. It's a little bit frustrating, but I've found another, maybe more logical, way. Triggers in the Lambda console only support a few message/notification services and seem to be mostly for test purposes. However, there is a way to invoke your lambda function from an event in S3.
To configure S3 to send an event to a lambda function when something happens in your bucket, just go to your bucket through this path in the S3 console:
BucketName > Properties > Event Notifications
AWSCLI
There you can configure your event source; even the awscli supports it via the 's3api' service command:
#$ aws s3api put-bucket-notification # Deprecated
#$ aws s3api put-bucket-notification-configuration
The latter supports the following destinations from S3:
Lambda functions
SNS Topic
SQS Queue
Ref: Using S3 triggers with Lambda: https://docs.aws.amazon.com/lambda/latest/dg/with-s3-tutorial.html#with-s3-tutorial-configure-event-source
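A sketch of the CLI variant (bucket name and function ARN are placeholders; this assumes the function's resource policy already allows s3.amazonaws.com to invoke it):
aws s3api put-bucket-notification-configuration \
--bucket my-bucket \
--notification-configuration '{
  "LambdaFunctionConfigurations": [
    {
      "LambdaFunctionArn": "arn:aws:lambda:us-east-1:123456789012:function:my-function",
      "Events": ["s3:ObjectCreated:*"]
    }
  ]
}'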
It seems like this is not possible at the moment. I have checked the aws-sdk, and there is a createEventSourceMapping method, but that one only allows DynamoDB, Kinesis, etc.
https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/Lambda.html#createEventSourceMapping-property

AWS guardduty generate sample event and generate cloudwatch event

I'm working on a Lambda function to process AWS GuardDuty findings.
I'd like to generate sample events, which is easily done using the CreateSampleFindings API call or the create-sample-findings cli command.
I have a custom cloudwatch rule that matches the following event pattern and triggers my Lambda function:
{
  "detail-type": [
    "GuardDuty Finding"
  ],
  "source": [
    "aws.guardduty"
  ]
}
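(For reference, a hedged sketch of registering that same pattern from the CLI; the rule name is a placeholder:)
aws events put-rule \
--name guardduty-findings \
--event-pattern '{"detail-type":["GuardDuty Finding"],"source":["aws.guardduty"]}'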
Generating the first sample finding easily triggers a cloudwatch event
$ aws guardduty create-sample-findings \
--detector-id abcd12345efgh6789 \
--finding-types Recon:EC2/PortProbeUnprotectedPort
However, when I call this same command again, the count of the finding in GuardDuty increments, but no more cloudwatch events are generated.
$ aws guardduty get-findings \
--detector-id abcd12345efgh6789 \
--finding-ids zyxwv987654acbde1234 \
--query "Findings[].Service.Count" --output text
2
I understand why this behavior is in place: findings are grouped by unique signature, and triggering cloudwatch events for every instance of a unique finding would be too much noise.
However, for development/debugging purposes, is there a way I can generate multiple sample events that will trigger a cloudwatch event?
For anyone who comes across this: for testing purposes, disabling GuardDuty and then re-enabling it allows you to regenerate sample findings that trigger the CloudWatch event. This method has worked for me while creating a log forwarder for GuardDuty.
As @jl-dos has pointed out, you can just disable/enable GD. But what that effectively does is delete all findings for this GD instance, so when you go to create sample findings, they are brand new and trigger the CloudWatch events.
The other option I've found is to archive the current findings. Then, when you create new sample findings, they come out as brand new ones instead of just incrementing the counters, and this should also trigger a CloudWatch event.
To do that, use a combination of the aws guardduty get-findings and aws guardduty archive-findings commands.
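A sketch of that combination (the detector ID is reused from the question; using list-findings to enumerate the IDs is an assumption on my part, since the archive call needs the IDs as input):
# Collect the IDs of all current findings for this detector.
FINDING_IDS=$(aws guardduty list-findings \
--detector-id abcd12345efgh6789 \
--query 'FindingIds' --output text)
# Archive them so the next sample finding is brand new.
aws guardduty archive-findings \
--detector-id abcd12345efgh6789 \
--finding-ids $FINDING_IDS
# This should now emit a fresh CloudWatch event.
aws guardduty create-sample-findings \
--detector-id abcd12345efgh6789 \
--finding-types Recon:EC2/PortProbeUnprotectedPort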

Using AWS SNS when ec2 instance is deployed in us-west-1

I have a quick question about the usage of AWS SNS.
I have deployed an EC2 (t2.micro, Linux) instance in us-west-1 (N. California). I have written a python script using boto3 to send a simple text message to my phone. Later I discovered that there is no SNS text-messaging support for instances deployed outside us-east-1 (N. Virginia). Up to that point it made sense, because I see the error below when I execute my python script with the region defined as "us-west-1" in aws configure (AWS cli) and in my python script.
botocore.errorfactory.InvalidParameterException: An error occurred (InvalidParameter) when calling the Publish operation: Invalid parameter: PhoneNumber Reason:
But to test, when I changed the region in aws configure and in my python script to "us-east-1", my script pushed a text message to my phone. Isn't that weird? Can anyone please explain why this works just by changing the region in the AWS cli and in my python script, even though my instance is still in us-west-1 and I don't see a "Publish text message" option on the SNS dashboard in the N. California region?
Is redefining the aws cli with us-east-1 similar to deploying a new instance altogether in us-east-1? I don't think so; correct me if I am wrong. Or is it like having an instance in us-west-1 but just using the SNS service from us-east-1? Please shed some light.
Here is my python script if anyone needs to look at it (it's a simple snippet).
import boto3

def send_message():
    # Create an SNS client
    client = boto3.client("sns", aws_access_key_id="XXXX", aws_secret_access_key="XXXX", region_name="us-east-1")
    # Send your sms message.
    client.publish(PhoneNumber="XXXX", Message="Hello World!")

if __name__ == '__main__':
    send_message()
Is redefining the aws cli with us-east-1 similar to deploying a new
instance altogether in us-east-1?
No, it isn't like that at all.
Or is it like having an instance in us-west-1, but just using SNS
service from us-east-1?
Yes, that's all you are doing. You can connect to any AWS region's API from anywhere on the Internet. It doesn't matter that the code is running on an EC2 instance in a specific region; it only matters which region you tell the SDK/CLI to use.
You could run the same code on your local computer. Obviously your local computer is not running on AWS, so you would have to tell the code which AWS region to send the API calls to. What you are doing is the same thing.
Code running on an EC2 server is not limited to using the AWS API in the same region the EC2 server is in.
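To make that concrete, a minimal sketch with the CLI (the phone number is a placeholder): the --region flag alone decides which regional SNS endpoint receives the request, regardless of where the command runs.
aws sns publish --region us-east-1 --phone-number "+15555550100" --message "Hello World!"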
Did you try creating a topic before publishing to it? You should try creating a topic and then publishing to that topic.

AWS SQS --cli-input-json does not recognize attribute FifoQueue

I'm using aws cli 1.11.102 on Windows. The following two commands give me different results:
aws sqs create-queue --cli-input-json "{\"QueueName\":\"JustANormal_name\",\"Attributes\":{\"FifoQueue\":\"false\"}}"
This gives me an error:
An error occurred (InvalidAttributeName) when calling the CreateQueue operation: Unknown Attribute FifoQueue.
However, I'm able to create a FIFO queue using
aws sqs create-queue --queue-name "Something.fifo" --attributes "{\"FifoQueue\":\"true\"}"
I've tried passing in other attributes in JSON format, and the following line works:
aws sqs create-queue --cli-input-json "{\"QueueName\":\"my_team-std_queue-2\",\"Attributes\":{\"DelaySeconds\":\"10\"}}"
I've also verified that I'm using N. Virginia for all the commands above, so I don't think the region is the problem.
--- Edit ---
Following up on John's comment: putting FifoQueue="true" works fine. This has been added to the bug report, and follow-ups go here:
AWS bug report
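For comparison, based on the edit above, a sketch of the --cli-input-json form that should succeed (the queue name is a placeholder; FIFO queue names must end in .fifo):
aws sqs create-queue --cli-input-json "{\"QueueName\":\"Something.fifo\",\"Attributes\":{\"FifoQueue\":\"true\"}}"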