I have configured a data pipeline that executes a SQL statement and dumps the data into an S3 bucket. Everything in the pipeline is working fine and the data is being dumped successfully. Today I added an SnsAlarm to the OnSuccess event on my activity and subscribed an SQS queue to that SNS topic. However, I do not get any message in the queue even though the activity succeeds, and I don't see any sort of log related to SNS success or failure.
Has anyone used SnsAlarm in AWS Data Pipeline before? Any help would be great.
Yes, you can attach SnsAlarms (they are an action object in Data Pipeline) to activities as well as to the pipeline itself:
{
  "id" : "SuccessNotify",
  "name" : "SuccessNotify",
  "type" : "SnsAlarm",
  "topicArn" : "arn:aws:sns:us-east-1:28619EXAMPLE:ExampleTopic",
  "subject" : "COPY SUCCESS: #{node.@scheduledStartTime}",
  "message" : "Files were copied from #{node.input} to #{node.output}."
}
Be sure to update topicArn with the ARN of the SNS topic at which you wish to receive alerts.
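For reference, a sketch of how the alarm is attached: the activity references the action through its onSuccess field (the activity definition below is hypothetical):

```
{
  "id" : "MyCopyActivity",
  "type" : "CopyActivity",
  "onSuccess" : { "ref" : "SuccessNotify" }
}
```

It's also worth checking that the pipeline's role is allowed to call sns:Publish on that topic; without that permission the alarm can fail without an obvious log entry.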
More Info: http://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-object-snsalarm.html
More Info on Datapipeline Objects: http://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-pipeline-objects.html
I am trying to set up self-hosted runners for GitHub using Terraform with the Phillips-Labs terraform-aws-github-runner module. I see the GitHub webhook sending/receiving messages, the SQS queue receiving messages, and those messages being retrieved. The scale-up Lambda is firing and I see the following logs:
2023-01-31 11:50:15.879 INFO [scale-up:22b11002-76d2-5596-9451-4c51746730c2 index.js:119051 scaleUp] Received workflow_job from {my-org}/terraform-aws-github-self-hosted-runners
{}
2023-01-31 11:50:15.880 INFO [scale-up:22b11002-76d2-5596-9451-4c51746730c2 index.js:119084 scaleUp] Received event
{
"runnerType": "Org",
"runnerOwner": "my-org",
"event": "workflow_job",
"id": "11002102910"
}
2023-01-31 11:50:16.188 DEBUG [gh-auth:22b11002-76d2-5596-9451-4c51746730c2 index.js:118486 createAuth] GHES API URL: {"runnerType":"Org","runnerOwner":"my-org","event":"workflow_job","id":"11002102910"}
2023-01-31 11:50:16.193 WARN [scale-runners:22b11002-76d2-5596-9451-4c51746730c2 index.js:118529 Runtime.handler] Ignoring error: error:1E08010C:DECODER routines::unsupported
{
"runnerType": "Org",
"runnerOwner": "my-org",
"event": "workflow_job",
"id": "11002102910"
}
I do not see any EC2 instances being created. I suspect GHES API URL: should have a value after it, but I'm not certain. Also, the final log says it is ignoring an error...
I have confirmed my private key PEM file is stored as a multi-line secret in Secrets Manager.
Any advice would be much appreciated!
It looks like not all the permissions needed by the GitHub app are documented. I needed to add a subscription to the Workflow run event.
In AWS IoT, I'm able to publish an MQTT message from a device to the topic "topic/messages":
{
  "id": "messageID",
  "value": "messageValue"
}
And I want to "augment" it on the server by adding a timestamp to it, and let the message continue on the SAME TOPIC.
So I'd like subscribers on "topic/messages" to receive this:
{
  "id": "messageID",
  "value": "messageValue",
  "serverTimestamp": "1637867431920" <--- Here
}
However, I can't find a way to process the message and let it flow on the same topic.
I can add a rule:
SELECT *, timestamp() AS serverTimestamp FROM 'topic/#'
But the rule does not augment the original message; it creates an augmented copy and redirects it to some other service (Lambda, DynamoDB, Republish, etc.). Those services work with the copy, so the subscribers still receive the originally sent message:
{
  "id": "messageID",
  "value": "messageValue"
}
I can republish the message to the same topic, BUT since the topic has an attached rule, the republish triggers the rule's action again, in a recursive loop...
All the AWS examples I've read take the message, transform it, and do something else with it (save to DynamoDB, save to a bucket, send to Salesforce...), but none of them modify the message being sent.
So what I'm looking for is a way to receive the message, add a field (or more) to it, and let it flow on the same topic.
What is the simplest way to do this?
In AWS IoT Core, I set up a rule with a republish action to update a thing's shadow (TestThing's shadow) like this (I created a new IAM role for the action, in case you are wondering).
What I was expecting was that the thing's shadow would be updated and nothing would be published to 'testthing/error' when I publish a message to 'testthing/message'. But when I published the following message to 'testthing/message' with the AWS IoT MQTT client:
{
  "state": {
    "reported": {
      "Info": "Hello AWS IoT!"
    }
  }
}
I got this error from 'testthing/error':
...
"failedAction": "RepublishAction",
"failedResource": "/things/TestThing/shadow/update",
"errorMessage": "Failed to republish to topic. Received Server error. The error code is 403. Message arrived on: testthing/message, Topic: /things/TestThing/shadow/update"
...
If I change the topic to which the message should be republished to 'testthing/destination', everything works fine and no error message is published to 'testthing/error'.
Am I missing something?
$aws/# is a reserved topic.
As per the AWS documentation:
If you are republishing to a reserved topic (one that begins with $), use $$ instead.
Please replace $ with $$ and try again!
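For example, a sketch of the rule's republish action with the escaped reserved topic (the role ARN here is a hypothetical placeholder):

```
"republish": {
  "roleArn": "arn:aws:iam::123456789012:role/my-republish-role",
  "topic": "$$aws/things/TestThing/shadow/update"
}
```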
cheers,
ram
I am setting up AWS CodePipeline based on the developers' Git branch.
When a developer commits code, the pipeline is triggered via a webhook. Now, the idea is that when there is a failure in the pipeline and the developer opens a pull request, the reviewer should know that this is a bad branch: he should be able to see the Git status of the branch showing that there is a failure.
Earlier I used a build tool called Codeship, which has a GitHub app that does this. I have gone through the GitHub API:
https://developer.github.com/v3/checks/runs/#create-a-check-run
But not sure where to start.
To send a notification when a stage fails, follow these steps:
Based on the CloudWatch events emitted by CodePipeline [0], trigger a Lambda function [1].
The Lambda function can call the "list-pipeline-executions" API [2], from which you can fetch all the required values, like the commit ID, status message, etc. [3].
Once the values are fetched, publish them to SNS from the same Lambda function. The following links show how to publish to SNS using Lambda [4][5].
[0] https://docs.aws.amazon.com/codepipeline/latest/userguide/tutorials-cloudwatch-sns-notifications.html
[1] https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/RunLambdaSchedule.html
[2] https://docs.aws.amazon.com/cli/latest/reference/codepipeline/list-pipeline-executions.html
[3] https://docs.aws.amazon.com/cli/latest/reference/codepipeline/index.html
[4] https://gist.github.com/jeremypruitt/ab70d78b815eae84e037
[5] Can you publish a message to an SNS topic using an AWS Lambda function backed by node.js?
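The steps above might be sketched roughly like this (hedged: the topic ARN is a placeholder, and the event field names follow the CodePipeline CloudWatch event shape):

```python
import json

def build_notification(execution_summary):
    """Format an SNS message from a list_pipeline_executions summary.
    Field names follow the CodePipeline API response shape."""
    revisions = execution_summary.get("sourceRevisions", [])
    commit_id = revisions[0]["revisionId"] if revisions else "unknown"
    return (
        f"Pipeline execution {execution_summary['pipelineExecutionId']} "
        f"finished with status {execution_summary['status']} "
        f"(commit {commit_id})"
    )

def handler(event, context):
    # boto3 is available in the Lambda runtime; imported here so the
    # helper above can also be used without AWS credentials.
    import boto3
    pipeline = event["detail"]["pipeline"]          # from the CloudWatch event
    execution_id = event["detail"]["execution-id"]

    cp = boto3.client("codepipeline")
    executions = cp.list_pipeline_executions(pipelineName=pipeline)
    summary = next(
        e for e in executions["pipelineExecutionSummaries"]
        if e["pipelineExecutionId"] == execution_id
    )

    sns = boto3.client("sns")
    sns.publish(
        TopicArn="arn:aws:sns:us-east-1:111122223333:pipeline-alerts",  # placeholder ARN
        Subject=f"CodePipeline: {pipeline} {summary['status']}",
        Message=build_notification(summary),
    )
```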
I have done the following to write the status back to the Git repo:
I made use of the GitHub status API:
https://developer.github.com/v3/repos/statuses/
and wrote a Lambda function to do a POST request with these details: state, target_url, context.
{
  "state": "success",
  "target_url": "https://example.com/build/status",
  "description": "The build succeeded!",
  "context": "continuous-integration/jenkins"
}
(target_url is the build tool URL, e.g. CodeBuild or Jenkins)
The response should contain:
"url": "https://api.github.com/repos//-/statuses/6dcb09b5b57875f334f61aebed695e2e4193db5e"
All these details can be obtained from the CloudWatch event for the pipeline, using the event detail fields:
event.detail.state; event.detail.pipeline; event.region; event['detail']['execution-id'];
data.pipelineExecution.artifactRevisions[0].revisionUrl
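A sketch of such a Lambda (hedged: the org/repo names, token handling, and the commit-sha event field are hypothetical placeholders; the real SHA would come from the pipeline execution's artifact revisions):

```python
import json
import urllib.request

GITHUB_API = "https://api.github.com"

def build_status_request(owner, repo, sha, pipeline_state, target_url):
    """Map a CodePipeline state to a GitHub commit-status POST request."""
    state = {"SUCCEEDED": "success", "FAILED": "failure",
             "STARTED": "pending"}.get(pipeline_state, "error")
    url = f"{GITHUB_API}/repos/{owner}/{repo}/statuses/{sha}"
    payload = {
        "state": state,
        "target_url": target_url,
        "description": f"Pipeline {pipeline_state.lower()}",
        "context": "continuous-integration/codepipeline",
    }
    return url, payload

def handler(event, context):
    url, payload = build_status_request(
        "my-org", "my-repo",                 # placeholders
        event["detail"]["commit-sha"],       # hypothetical field, for illustration
        event["detail"]["state"],
        "https://example.com/build/status",
    )
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={"Authorization": "token <github-token>",  # e.g. from Secrets Manager
                 "Accept": "application/vnd.github+json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status
```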
Cheers.
We have a case where we need to send a JSON object with a push notification. Reading the documentation, I found out I can do the following.
iOS:
{
  "default": req.body.message,
  "APNS": {
    "aps": {
      "alert": {
        "message": req.body.message,
        "data": "{JSON Object}"
      }
    }
  }
}
Android:
{
  "GCM": {
    "data": {
      "messagee": {
        "message": req.body.message,
        "data": "{JSON Object}"
      }
    }
  }
}
But I got sceptical about whether we should use message attributes, and if not, then what is the use of message attributes?
Based on your description it seems like you do not need to use message attributes. Quoting the AWS docs:
You can also use message attributes to help structure the push notification message for mobile endpoints. In this scenario the message attributes are only used to help structure the push notification message and are not delivered to the endpoint, as they are when sending messages with message attributes to Amazon SQS endpoints.
There are some use cases for attaching message attributes to push notifications. One such use case is for TTLs on outbound messages. Again quoting the docs:
The TTL message attribute is used to specify expiration metadata about a message. This allows you to specify the amount of time that the push notification service, such as Apple Push Notification Service (APNS) or GCM, has to deliver the message to the endpoint. If for some reason (such as the mobile device has been turned off) the message is not deliverable within the specified TTL, then the message will be dropped and no further attempts to deliver it will be made. To specify TTL within message attributes, you can use the AWS Management Console, AWS software development kits (SDKs), or query API.