We have a GCS bucket named 'testfiles' and a Pub/Sub topic 'testtopic' with a subscription 'testsubscription'. We have created a notification configuration to receive notifications on the Pub/Sub topic for any event happening on the GCS bucket. When we run the following command to list the notifications on the bucket,
gcloud storage buckets notifications list gs://testfiles
we see the following output:
{
  "kind": "storage#notification",
  "selfLink": "https://www.googleapis.com/storage/v1/b/testfiles/notificationConfigs/28",
  "id": "28",
  "topic": "//pubsub.googleapis.com/projects/test-project/topics/testtopic",
  "etag": "28",
  "payload_format": "JSON_API_V1"
}
Also, we have provided the Cloud Storage Service account the Pub/Sub Publisher role.
Despite these settings, when we upload a file to the bucket 'testfiles', we do not see any JSON messages in the above-mentioned topic/subscription (testtopic/testsubscription).
We tried to follow the documentation here
Please advise, if there is something we are missing.
The GCP console (console.cloud.google.com) was not displaying the Pub/Sub messages. However, when we connected to the Pub/Sub topic's subscription through listener code, it received the JSON messages without any issues.
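The listener side can be sketched in Python. This is a minimal sketch of the message handling only, assuming the JSON_API_V1 payload format from the config above; the Pub/Sub subscription wiring (e.g. google-cloud-pubsub's streaming pull) is left out, and report.csv is a made-up object name. The eventType/bucketId attribute names follow the GCS notification documentation.

```python
import json

def handle_gcs_notification(data: bytes, attributes: dict) -> str:
    """Handle one Pub/Sub message produced by a GCS notification config.

    With payload_format JSON_API_V1, the message body is the JSON resource
    representation of the object; metadata such as the event type and bucket
    name arrives in the message attributes.
    """
    event_type = attributes.get("eventType", "UNKNOWN")  # e.g. OBJECT_FINALIZE
    obj = json.loads(data.decode("utf-8"))
    return "{}: gs://{}/{}".format(event_type, obj["bucket"], obj["name"])

# Simulated message for an upload to the bucket from the question:
payload = json.dumps({"bucket": "testfiles", "name": "report.csv"}).encode()
attrs = {"eventType": "OBJECT_FINALIZE", "bucketId": "testfiles"}
print(handle_gcs_notification(payload, attrs))
# OBJECT_FINALIZE: gs://testfiles/report.csv
```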
I'm attempting to set up a CloudWatch Events rule to notify on AWS IAM actions like DeleteUser or CreateUser. But when I tried to create an event pattern, I couldn't find IAM in the service name list, even though when I searched the AWS documentation I couldn't find any mention of IAM not being supported by CloudWatch event rules. So I tried to create a custom event, but I didn't receive any email from SNS (my target), and yes, I made sure CloudWatch has permission to invoke SNS, as we already have other working events. Any idea why this is not working?
{
  "source": [
    "aws.iam"
  ],
  "detail-type": [
    "AWS API Call via CloudTrail"
  ],
  "detail": {
    "eventSource": [
      "iam.amazonaws.com"
    ],
    "eventName": [
      "CreateUser",
      "DeleteUser"
    ]
  }
}
I figured it out: IAM emits CloudTrail events only in us-east-1 and I'm using a different region. It worked when I created the CloudWatch event in N. Virginia.
The source parameter needs to be "aws.cloudtrail" not "aws.iam".
IAM is a global service. It can only report in us-east-1 (N. Virginia).
I have the same exact config and the region is the same as well, but creating a new user still doesn't trigger the event, even though the event appears in CloudTrail as well as in the monitoring of the event rule I created. I see the documentation says CloudTrail has to be enabled, but when I create a rule for security group modification (EC2 events) it works fine, just not the IAM one. Is there any permission I am missing for AWS events to send logs to CloudTrail, and if so, how did you resolve it?
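The pattern and region answers above can be illustrated with a toy matcher. This is a simplified, illustrative re-implementation of event-pattern matching, not AWS code (the real CloudWatch Events/EventBridge matcher supports more operators), and the sample event is made up.

```python
def matches(pattern: dict, event: dict) -> bool:
    """Simplified event-pattern matching: every key in the pattern must be
    present in the event; a list means "value must be one of these"; a
    nested dict recurses into the corresponding event field."""
    for key, expected in pattern.items():
        if key not in event:
            return False
        actual = event[key]
        if isinstance(expected, dict):
            if not isinstance(actual, dict) or not matches(expected, actual):
                return False
        elif actual not in expected:  # list of allowed literal values
            return False
    return True

pattern = {
    "source": ["aws.iam"],
    "detail-type": ["AWS API Call via CloudTrail"],
    "detail": {"eventSource": ["iam.amazonaws.com"],
               "eventName": ["CreateUser", "DeleteUser"]},
}
event = {
    "source": "aws.iam",
    "detail-type": "AWS API Call via CloudTrail",
    "region": "us-east-1",  # IAM's CloudTrail events are delivered here
    "detail": {"eventSource": "iam.amazonaws.com", "eventName": "CreateUser"},
}
print(matches(pattern, event))  # True, but only if the rule lives in us-east-1
```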
I have been trying to publish an SMS from the AWS SNS console. It shows a success result, but the message is not getting through.
Every request was noted as a failure in the console.
The response when I publish a text message:
SMS message published to phone number +91XXXXXXXXXX successfully.
Message ID: e3d2bc39-2792-5b2e-adcc-e4733a800795
I was facing the same issue and found that I needed to raise a support ticket to use SNS SMS.
Below is the link for generating a support ticket; explain your use case for SNS SMS:
SNS support ticket link
You can activate Delivery status logging.
From Viewing Amazon CloudWatch metrics and logs for SMS deliveries - Amazon Simple Notification Service:
On the Text messaging (SMS) page, in the Text messaging preferences section, choose Edit.
On the Edit text messaging preferences page, in the Delivery status logging section, do the following:
Sample rate: 100%
Service role: Create a new service role (or choose an existing one if it is there)
You can then send an SMS directly from the Text messaging (SMS) page. It will show a Delivery Statistics graph to indicate success/failure.
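The same preferences can also be set programmatically via the SNS SetSMSAttributes API. Below is a sketch that only builds the attribute map, mirroring the console steps above; the boto3 call is shown commented out, and the role ARN is a made-up example.

```python
def sms_delivery_logging_attrs(role_arn: str, sample_rate: int = 100) -> dict:
    """Attribute map for sns.set_sms_attributes(attributes=...) that turns
    on delivery-status logging at the given success sampling rate."""
    return {
        "DeliveryStatusIAMRole": role_arn,
        "DeliveryStatusSuccessSamplingRate": str(sample_rate),
    }

# Hypothetical role ARN for illustration only:
attrs = sms_delivery_logging_attrs("arn:aws:iam::123456789012:role/SNSSMSLogs")
# With boto3 (not executed here):
#   boto3.client("sns").set_sms_attributes(attributes=attrs)
print(attrs["DeliveryStatusSuccessSamplingRate"])  # 100
```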
Also, for each message, there will be a log entry in Amazon CloudWatch Logs (go to CloudWatch / Logs / then choose the SNS log). It will look similar to this:
{
  "notification": {
    "messageId": "xxx",
    "timestamp": "2020-12-09 08:40:19.536"
  },
  "delivery": {
    "phoneCarrier": "Optus Mobile Pty Ltd",
    "mnc": 2,
    "numberOfMessageParts": 1,
    "destination": "+61455555555",
    "priceInUSD": 0.03809,
    "smsType": "Promotional",
    "mcc": 505,
    "providerResponse": "Message has been accepted by phone carrier",
    "dwellTimeMs": 524,
    "dwellTimeMsUntilDeviceAck": 2453
  },
  "status": "SUCCESS"
}
This log gives you the most detail about whether an SMS was sent to the phone carrier, so you can determine where it might be failing.
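Reading such a log entry can be sketched as follows; this is a minimal example using a trimmed-down copy of the sample entry above, pulling out the fields that tell you whether the carrier accepted the message.

```python
import json

# Trimmed copy of the sample CloudWatch Logs entry shown above:
log_entry = """{
  "notification": {"messageId": "xxx", "timestamp": "2020-12-09 08:40:19.536"},
  "delivery": {
    "destination": "+61455555555",
    "phoneCarrier": "Optus Mobile Pty Ltd",
    "providerResponse": "Message has been accepted by phone carrier"
  },
  "status": "SUCCESS"
}"""

def summarize_sms_log(entry_json: str) -> str:
    """Return a one-line summary: overall status, destination, and the
    carrier's response string."""
    entry = json.loads(entry_json)
    d = entry.get("delivery", {})
    return "{}: {} ({})".format(entry["status"], d.get("destination"),
                                d.get("providerResponse"))

print(summarize_sms_log(log_entry))
# SUCCESS: +61455555555 (Message has been accepted by phone carrier)
```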
I am not able to delete these subscriptions attached to the CloudWatch Logs Groups.
These subscriptions were created by a CloudFormation stack via the Serverless Framework. However, when I finished testing and removed the template, there was a permission error during the cleanup. Hence, these subscriptions became dangling and I am not able to locate them.
I tried with the CLI, but it shows no relevant info about the subscriptions:
$ aws logs describe-log-groups --log-group-name-prefix yyy
{
  "logGroups": [
    {
      "logGroupName": "yyy",
      "creationTime": 1555604143719,
      "retentionInDays": 1,
      "metricFilterCount": 0,
      "arn": "arn:aws:logs:us-east-1:xxx:log-group:yyy:*",
      "storedBytes": 167385869
    }
  ]
}
Select the Log Group using the radio button on the left of the Log Group name. Then click Actions, Remove Subscription Filter.
Via the CLI: documented in the AWS documentation (this link).
Via the console UI: see this capture.
Since you created the subscription with a CloudFormation stack via Serverless, manually removing the subscription filter as jarmod suggests is not best practice.
What you should do is remove the cloudwatchLog event from the Lambda functions and deploy; that should remove the subscriptions.
I am working with Amazon SES and SQS to receive the bounce list of emails. For security reasons, I am only given the information necessary to connect to the SES and SQS services (host name, API keys, etc.), so I am not able to use the AWS console to see the status of the queue. This is reasonable, as I don't want to mess with the many other services under the same account, especially when the services are not free. However, as jobs are added to SQS by SES, I need a way to see what's in SQS, so as to know whether a bug is because the job is not inside SQS or simply because my code failed to retrieve the job.
So, are there tools that I can view the SQS status when I don't have access to AWS console?
Yes, you can use the AWS CLI (https://aws.amazon.com/cli/) to view basic information about the queue:
For example:
aws sqs get-queue-attributes --queue-url https://sqs.us-east-1.amazonaws.com/99999999/HBDService-BackgroundTaskQueue --attribute-names All
will show you this:
{
  "Attributes": {
    "LastModifiedTimestamp": "1522235654",
    "ApproximateNumberOfMessages": "7",
    "ReceiveMessageWaitTimeSeconds": "20",
    "CreatedTimestamp": "1522235629",
    "ApproximateNumberOfMessagesDelayed": "0",
    "QueueArn": "arn:aws:sqs:us-east-1:999999999:HBDService-BackgroundTaskQueue",
    "RedrivePolicy": "{\"deadLetterTargetArn\":\"arn:aws:sqs:us-east-1:999999999:HBDService-BackgroundTaskQueue-DLQ\",\"maxReceiveCount\":100}",
    "MaximumMessageSize": "262144",
    "DelaySeconds": "0",
    "ApproximateNumberOfMessagesNotVisible": "0",
    "MessageRetentionPeriod": "1209600",
    "VisibilityTimeout": "180"
  }
}
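As a small worked example, the three Approximate* counters from that output can be combined to estimate the total queue depth (visible + in flight + delayed):

```python
def queue_depth(attributes: dict) -> int:
    """Sum SQS's three approximate message counters into one depth figure.
    The attribute values arrive as strings, as in the CLI output above."""
    return sum(int(attributes[k]) for k in (
        "ApproximateNumberOfMessages",
        "ApproximateNumberOfMessagesNotVisible",
        "ApproximateNumberOfMessagesDelayed"))

# Values taken from the get-queue-attributes output above:
sample = {"ApproximateNumberOfMessages": "7",
          "ApproximateNumberOfMessagesNotVisible": "0",
          "ApproximateNumberOfMessagesDelayed": "0"}
print(queue_depth(sample))  # 7
```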
I set up my Lambda function according to the AWS guides by setting a trigger in the setup stage (following the guide, except that the guide uses an IoT button and I'm using a rule).
It sets up the trigger rule in the AWS IoT console for me. The thing is set up with a certificate and an "iot:*" policy, which gives it full IoT access.
The thing is continuously sending messages to the cloud under a certain topic. The messages can be received if I subscribe to the topic in the AWS IoT Test console.
My lambda function gets triggered if I publish something under that topic from the AWS IoT Test console.
But the function doesn't trigger from the continuous messages sent by the thing. It only triggers from the IoT Test console.
I didn't add any other policy under certificates for the thing in relation to this trigger. Do I have to do so? What should it be?
I tried changing my topic SQL to SELECT * FROM '*'
Try changing your SQL to SELECT * FROM '#'. With '#' you get every published topic. When you use '*', you don't match nested topics such as sample/newTopic.
With this SQL statement the Lambda function gets invoked for every incoming message. If the AWS IoT console shows the message but your Lambda function doesn't do anything, check whether Lambda wrote a log in CloudWatch.
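Why '#' works where '*' does not comes down to MQTT wildcard rules: '+' matches exactly one topic level, '#' matches all remaining levels, and '*' is not a wildcard at all, so FROM '*' only matches a topic literally named "*". A simplified matcher to illustrate (this is not the AWS implementation):

```python
def topic_matches(filter_: str, topic: str) -> bool:
    """Simplified MQTT topic-filter matching: '+' matches one level,
    '#' matches everything from that level on, anything else must
    match literally, level by level."""
    f_parts, t_parts = filter_.split("/"), topic.split("/")
    for i, fp in enumerate(f_parts):
        if fp == "#":
            return True
        if i >= len(t_parts):
            return False
        if fp != "+" and fp != t_parts[i]:
            return False
    return len(f_parts) == len(t_parts)

print(topic_matches("#", "sample/newTopic"))         # True
print(topic_matches("*", "sample/newTopic"))         # False: '*' is literal
print(topic_matches("sample/+", "sample/newTopic"))  # True
```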
If your AWS IoT thing can't trigger your AWS Lambda function, you may have a JSON mapping issue, and you may also need to improve your SQL query. In my case, I used the following query to give Lambda a clean input:
SELECT message.reported.* from "#"
With JSON mapping:
{
  "desired": {
    "light": "green",
    "Temperature": "55",
    "timestamp": 1526323886
  },
  "reported": {
    "light": "blue",
    "Temperature": "55",
    "timestamp": 1526323886
  },
  "delta": {
    "light": "green"
  }
}
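What that query hands to Lambda can be illustrated with a sketch, assuming the device publishes the shadow document above under a top-level "message" key (as the message.reported.* query implies); the projection itself is done by the IoT rule engine, this just mimics it:

```python
# Payload as assumed to be published by the device (shadow doc under "message"):
published = {
    "message": {
        "desired": {"light": "green", "Temperature": "55", "timestamp": 1526323886},
        "reported": {"light": "blue", "Temperature": "55", "timestamp": 1526323886},
        "delta": {"light": "green"},
    }
}

def select_reported(payload: dict) -> dict:
    """Mimics SELECT message.reported.* FROM "#": the Lambda event carries
    only the flattened fields of message.reported, not the whole document."""
    return dict(payload["message"]["reported"])

event = select_reported(published)
print(event)  # {'light': 'blue', 'Temperature': '55', 'timestamp': 1526323886}
```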
Then analyze the CloudWatch logs, and check your AWS IoT console for shadow updates ("Atualizações de sombra", i.e. "Shadow updates", shown in green in the original screenshot) and for publications (orange).
For full details of an end-to-end implementation of AWS IoT using Lambda, see:
IoT Project - CPU Temperature from Ubuntu to AWS IoT