AWS Lambda: publish messages to SNS topics in another region

I have a Lambda function in us-east-1 that needs to publish messages to SNS topics in both the us-east-1 and eu-central-1 regions. Is this possible?
Here is my code snippet. Can someone help me with how I can achieve this?
I am surprised to see that the boto3 client documentation says "You can publish messages only to topics and endpoints in the same Amazon Web Services Region".
from __future__ import print_function
import json
import urllib
import boto3

print('Loading message function...')

def send_to_sns(message, context):
    # This function receives JSON input with three fields: the ARN of an SNS topic,
    # a string with the subject of the message, and a string with the body of the message.
    # The message is then sent to the SNS topic.
    #
    # Example:
    #   {
    #       "topic": "arn:aws:sns:REGION:123456789012:MySNSTopic",
    #       "subject": "This is the subject of the message.",
    #       "message": "This is the body of the message."
    #   }
    sns = boto3.client('sns')
    sns.publish(
        TopicArn=message['topic'],
        Subject=message['subject'],
        Message=message['body']
    )
    return 'Sent a message to an Amazon SNS topic.'
My code throws the following error when the SNS topic is in a different region:
{
    "response": {
        "stackTrace": [
            [
                "/var/task/lambda_function.py",
                28,
                "send_to_sns",
                "Message=message['body']"
            ],
            [
                "/var/runtime/botocore/client.py",
                357,
                "_api_call",
                "return self._make_api_call(operation_name, kwargs)"
            ],
            [
                "/var/runtime/botocore/client.py",
                676,
                "_make_api_call",
                "raise error_class(parsed_response, operation_name)"
            ]
        ],
        "errorType": "InvalidParameterException",
        "errorMessage": "An error occurred (InvalidParameter) when calling the Publish operation: Invalid parameter: TopicArn"
    }
}

You are facing this issue because of a region conflict. With the code below you can check your current region:
my_session = boto3.session.Session()
my_region = my_session.region_name
To solve your problem, create the client with an explicit region. Assuming you are connected to us-east-1 and your SNS topic is in us-east-2:
sns = boto3.client('sns', region_name='us-east-2')
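For the original question (publishing to topics in both us-east-1 and eu-central-1 from one function), here is a minimal sketch that derives the region from each topic ARN and keeps one client per region; _client_for_topic and _sns_clients are illustrative names, not part of the original code:
import boto3

# Cache one SNS client per region so repeated publishes reuse connections.
_sns_clients = {}

def _client_for_topic(topic_arn):
    # An SNS topic ARN looks like arn:aws:sns:REGION:ACCOUNT_ID:TOPIC_NAME,
    # so the region is the fourth colon-separated field.
    region = topic_arn.split(':')[3]
    if region not in _sns_clients:
        _sns_clients[region] = boto3.client('sns', region_name=region)
    return _sns_clients[region]

def send_to_sns(message, context):
    # Publishes to whatever region the topic ARN names, so the same
    # handler works for us-east-1 and eu-central-1 topics alike.
    sns = _client_for_topic(message['topic'])
    sns.publish(
        TopicArn=message['topic'],
        Subject=message['subject'],
        Message=message['body']
    )
    return 'Sent a message to an Amazon SNS topic.'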

Related

AWS Chatbot and EventBridge for Glue Job State Changes Error - Event received is not supported

I am trying to set up AWS Chatbot with Slack integration to display error messages for changes in states (errors) for AWS Glue. I have set up an AWS EventBridge event pattern to catch Glue Job State Changes as follows:
{
    "source": ["aws.glue"],
    "detail-type": ["Glue Job State Change"],
    "detail": {
        "state": [
            "FAILED"
        ]
    }
}
This successfully catches all failed Glue Jobs and I have set up an AWS SNS topic as the target using the input transformer.
Input Transformer Input Path
{"jobname":"$.detail.jobName","jobrunid":"$.detail.jobRunId","jobstate":"$.detail.state"}
Input Transformer Input Template
"{\"detail-type\": \"Glue Job <job-name> has entered the state <job-state> with the message <message>.\"}"
AWS SNS has a subscription endpoint to AWS Chatbot, which fails to send the notification to Slack.
AWS Chatbot CloudWatch logs after an event using Input Transformer
Event received is not supported (see https://docs.aws.amazon.com/chatbot/latest/adminguide/related-services.html ):
{
    "subscribeUrl": null,
    "type": "Notification",
    "signatureVersion": "1",
    "signature": <signature>,
    "topicArn": <topic-arn>,
    "signingCertUrl": <signing-cert-url>,
    "messageId": <message-id>,
    "message": "{\"detail-type\": \"Glue Job MyJob has entered the state FAILED with the message SystemExit: None.\"}",
    "subject": null,
    "unsubscribeUrl": <unsubscribe-url>,
    "timestamp": "2022-03-02T12:17:16.879Z",
    "token": null
}
When the input is set to 'Matched Events' in the AWS EventBridge Select Target, the Slack notification sends, however it lacks any details.
Slack Notification
Glue Job State Change | eu-west-1 | Account: <account>
Glue Job State Change
AWS EventBridge Matched Events JSON Output
{
    "Type": "Notification",
    "MessageId": <message-id>,
    "TopicArn": <topic-arn>,
    "Message": "{\"detail-type\": [\"Glue Job State Change\"]}",
    "Timestamp": "2022-03-02T11:17:52.443Z",
    "SignatureVersion": "1",
    "Signature": <signature>,
    "SigningCertURL": <signing-cert-url>,
    "UnsubscribeURL": <unsubscribe-url>
}
There are very few differences between the two JSON outputs, yet the input-transformer version is considered an unsupported event. Is it possible to generate a custom message when using AWS Chatbot for errors?
The best solution was to create a Lambda function as the target of the AWS EventBridge rule, which performs a POST to a Slack webhook.
# Import modules
import logging
import json
import urllib3

# Set up logging
logger = logging.getLogger()
logger.setLevel(logging.INFO)

# Define Lambda function
def lambda_handler(event, context):
    http = urllib3.PoolManager()
    url = <url>
    link = <glue-studio-monitoring-link>
    message = f"A Glue Job {event['detail']['jobName']} with Job Run ID {event['detail']['jobRunId']} has entered the state {event['detail']['state']} with error message: {event['detail']['message']}. Visit the link for job monitoring {link}"
    logger.info(message)
    headers = {"Content-type": "application/json"}
    data = {'text': message}
    response = http.request('POST',
                            url,
                            body=json.dumps(data),
                            headers=headers,
                            retries=False)
    logger.info(response.status)
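To sanity-check the handler without deploying it, you can feed it a hand-written event. This sketch assumes the <url> and <glue-studio-monitoring-link> placeholders above have been filled in; the field values below are made up, covering only the keys the handler reads:
# Minimal hand-written "Glue Job State Change" event for a local test.
sample_event = {
    "detail": {
        "jobName": "MyJob",
        "jobRunId": "jr_0123456789abcdef",
        "state": "FAILED",
        "message": "SystemExit: None"
    }
}

# Invoke the handler directly; the context argument is unused, so None is fine.
lambda_handler(sample_event, None)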

How to get a glue crawler event state?

I am following this doc https://aws.amazon.com/premiumsupport/knowledge-center/start-glue-job-run-end/ to set up an auto-trigger on lambda when the crawler finishes. The event pattern I set on CloudWatch is:
{
    "detail": {
        "crawlerName": [
            "reddit_movie"
        ],
        "state": [
            "Succeeded"
        ]
    },
    "detail-type": [
        "Glue Crawler State Change"
    ],
    "source": [
        "aws.glue"
    ]
}
And I add a lambda function as target for this rule in cloudwatch.
I manually trigger the crawler, but it doesn't trigger the lambda after it finishes. From the crawler log I can see:
04:36:28
[6c8450a5-970a-4190-bd2b-829a82d67fdf] INFO : Table redditmovies_bb008c32d0d970f0465f47490123f749 in database video has been updated with new schema

04:36:30
[6c8450a5-970a-4190-bd2b-829a82d67fdf] BENCHMARK : Finished writing to Catalog

04:37:37
[6c8450a5-970a-4190-bd2b-829a82d67fdf] BENCHMARK : Crawler has finished running and is in state READY
Does the above log mean the crawler finished successfully? How do I find out why the lambda function is not triggered by the crawler? How can I debug this issue, and which log should I look at?
The following works.
Cloudwatch Event Rule -
{
    "source": [
        "aws.glue"
    ],
    "detail-type": [
        "Glue Crawler State Change"
    ],
    "detail": {
        "state": [
            "Succeeded"
        ]
    }
}
Sample lambda -
import boto3

glue = boto3.client('glue')

def lambda_handler(event, context):
    try:
        if event and 'detail' in event and event['detail'] and 'crawlerName' in event['detail']:
            crawler_name = event['detail']['crawlerName']
            print('Received event from crawlerName - {0}'.format(crawler_name))
            crawler = glue.get_crawler(Name=crawler_name)
            print('Received crawler from glue - {0}'.format(str(crawler)))
            database = crawler['Crawler']['DatabaseName']
    except Exception as e:
        print('Error handling events from crawler. Details - {0}'.format(e))
        raise e
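On the debugging question: one way to see whether the rule matched at all is to read the per-rule EventBridge metrics in the AWS/Events CloudWatch namespace. A sketch, where my-crawler-rule is a hypothetical rule name; if TriggeredRules is empty the event pattern never matched, while a non-zero FailedInvocations would mean the rule matched but could not invoke the target:
import boto3
from datetime import datetime, timedelta, timezone

cloudwatch = boto3.client('cloudwatch')

# How often the rule matched an event in the last hour.
stats = cloudwatch.get_metric_statistics(
    Namespace='AWS/Events',
    MetricName='TriggeredRules',
    Dimensions=[{'Name': 'RuleName', 'Value': 'my-crawler-rule'}],  # hypothetical name
    StartTime=datetime.now(timezone.utc) - timedelta(hours=1),
    EndTime=datetime.now(timezone.utc),
    Period=300,
    Statistics=['Sum'],
)
print(stats['Datapoints'])  # empty => the rule never matched; check the event pattern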
At first I followed the link https://aws.amazon.com/premiumsupport/knowledge-center/start-glue-job-run-end/ and it didn't work. I found that this is because the python lambda script in the link is not correct if you paste it directly: the handler body loses its indentation. Please check your lambda.
The python lambda copied from the link:
import boto3
client = boto3.client('glue')
def lambda_handler(event, context):
response = client.start_job_run(JobName = 'MyTestJob')
We need to fix the indentation as below:
import boto3

client = boto3.client('glue')

def lambda_handler(event, context):
    response = client.start_job_run(JobName='MyTestJob')

What IAM role should be assigned to aws lambda function so that it can get the emr cluster status

I've prepared a simple lambda function in AWS to terminate long-running EMR clusters after a certain threshold is reached. This code snippet was tested locally and works perfectly fine. Now I've pushed it into a lambda and taken care of the library dependencies, so that's also fine. This lambda is triggered by a CloudWatch rule, which is a simple cron schedule. I'm using an existing IAM role which has these 7 policies attached to it.
SecretsManagerReadWrite
AmazonSQSFullAccess
AmazonS3FullAccess
CloudWatchFullAccess
AWSGlueServiceRole
AmazonSESFullAccess
AWSLambdaRole
I've configured the lambda to be inside the same VPC and security group as the EMR clusters. Still I'm getting this error consistently:
An error occurred (AccessDeniedException) when calling the ListClusters operation: User: arn:aws:sts::xyz:assumed-role/dev-lambda-role/terminate_inactive_dev_emr_clusters is not authorized to perform: elasticmapreduce:ListClusters on resource: *: ClientError
Traceback (most recent call last):
  File "/var/task/terminate_dev_emr.py", line 24, in terminator
    ClusterStates=['STARTING', 'BOOTSTRAPPING', 'RUNNING', 'WAITING']
  File "/var/runtime/botocore/client.py", line 314, in _api_call
    return self._make_api_call(operation_name, kwargs)
  File "/var/runtime/botocore/client.py", line 612, in _make_api_call
    raise error_class(parsed_response, operation_name)
botocore.exceptions.ClientError: An error occurred (AccessDeniedException) when calling the ListClusters operation: User: arn:aws:sts::xyz:assumed-role/dev-lambda-role/terminate_inactive_dev_emr_clusters is not authorized to perform: elasticmapreduce:ListClusters on resource: *
My lambda function looks something like this:
import pytz
import boto3
from datetime import datetime, timedelta

def terminator(event, context):
    ''' cluster lifetime limit in hours '''
    LIMIT = 7
    TIMEZONE = 'Asia/Kolkata'
    AWS_REGION = 'eu-west-1'
    print('Start cluster check')
    emr = boto3.client('emr', region_name=AWS_REGION)
    local_tz = pytz.timezone(TIMEZONE)
    today = local_tz.localize(datetime.today(), is_dst=None)
    lifetimelimit = today - timedelta(hours=LIMIT)
    clusters = emr.list_clusters(
        CreatedBefore=lifetimelimit,
        ClusterStates=['STARTING', 'BOOTSTRAPPING', 'RUNNING', 'WAITING']
    )
    if clusters['Clusters'] is not None:
        for cluster in clusters['Clusters']:
            description = emr.describe_cluster(ClusterId=cluster['Id'])
            if (len(description['Cluster']['Tags']) == 1
                    and description['Cluster']['Tags'][0]['Key'] == 'dev.ephemeral'):
                print('Terminating Cluster: [{id}] with name [{name}]. It was active since: [{time}]'.format(id=cluster['Id'], name=cluster['Name'], time=cluster['Status']['Timeline']['CreationDateTime'].strftime('%Y-%m-%d %H:%M:%S')))
                emr.terminate_job_flows(JobFlowIds=[cluster['Id']])
    print('cluster check done')
    return
Any help is appreciated.
As the error message indicates, the lambda does not have permission to call ListClusters on EMR. Since you are working with EMR clusters and would also like to terminate them, you should give the lambda function an IAM role that has that capability. Create a new IAM policy from the AWS console (say, EMRFullAccess). Here is what it looks like:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": "elasticmapreduce:*",
            "Resource": "*"
        }
    ]
}
After creating the policy, create a new role from the AWS console with Lambda as the service and attach the newly created policy. After that, attach this role to your lambda function. That should solve the issue :-)
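If you'd rather not grant elasticmapreduce:* to the lambda, note that the terminator function above only calls ListClusters, DescribeCluster and TerminateJobFlows, so a policy covering just those actions should suffice. A minimal boto3 sketch of creating and attaching such a policy (the policy name is illustrative; the role name is taken from the error message):
import json
import boto3

iam = boto3.client('iam')

# Only the three EMR actions the terminator function actually calls.
policy_doc = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": [
            "elasticmapreduce:ListClusters",
            "elasticmapreduce:DescribeCluster",
            "elasticmapreduce:TerminateJobFlows"
        ],
        "Resource": "*"
    }]
}

resp = iam.create_policy(
    PolicyName='EMRTerminatorAccess',  # illustrative name
    PolicyDocument=json.dumps(policy_doc)
)

# Attach it to the role the lambda already assumes.
iam.attach_role_policy(
    RoleName='dev-lambda-role',
    PolicyArn=resp['Policy']['Arn']
)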

Get emails whenever a file is uploaded on s3 bucket using serverless

I want to get emails whenever a file is uploaded to an S3 bucket, as described in the title above. I am using Serverless. The issue is that the event I have created on S3 only gives me a notification in the S3 AWS console, and I don't know how to configure a CloudWatch event on S3 to trigger the lambda. So if someone knows how to trigger events on S3 using CloudWatch, I am all ears.
Here is my code:
import json
import boto3
import logging
from botocore.exceptions import ClientError

email_from = '*****#******.com'
email_to = '******#******.com'
#email_cc = '********#gmail.com'
email_subject = 'new event on s3 '
email_body = 'a new file is uploaded'

# setup simple logging for INFO
logger = logging.getLogger()
logger.setLevel(logging.INFO)

def sthree(event, context):
    """Send email whenever a file is uploaded to S3"""
    body = {}
    try:
        ses = boto3.client('ses')
        ses.send_email(
            Source=email_from,
            Destination={'ToAddresses': [email_to]},
            Message={'Subject': {'Data': email_subject},
                     'Body': {'Text': {'Data': email_body}}}
        )
    except ClientError as e:
        logger.error(e)
        return {"statusCode": 500, "body": json.dumps(str(e))}
    return {"statusCode": 200, "body": json.dumps(body)}
and here is my serverless.yml file
service: aws-python # NOTE: update this with your service name

plugins:
  - serverless-external-s3-event

provider:
  name: aws
  runtime: python2.7
  stage: dev
  region: us-east-1
  iamRoleStatements:
    - Effect: "Allow"
      Action:
        - "s3:*"
        - "ses:SendEmail"
        - "ses:SendRawEmail"
        - "s3:PutBucketNotification"
      Resource: "*"

functions:
  sthree:
    handler: handler.sthree
    description: send mail whenever a file is uploaded on S3
    events:
      - s3:
          bucket: cartegie-nirmine
          event: s3:ObjectCreated:*
          rules:
            - prefix: uploads/
            - suffix: .jpg
      - cloudwatchEvent:
          description: 'CloudWatch Event triggered '
          event:
            source:
              - "aws.S3"
            detail-type:
              - "S3 event Notification"
          enabled: true
If your goal is just to receive email notifications of operations on an S3 bucket, then you don't need a lambda function for that. For the use case mentioned in the question, you can achieve it using an SNS topic and S3 events. I will mention the steps to follow from the console (though the same can be achieved via the SDK or CLI); a boto3 sketch of the same steps follows the list.
1) Create a topic using the SNS console.
2) Subscribe to the topic. Use email as the communication protocol and provide your email id.
3) You will get an email requesting you to confirm your subscription to the topic. Confirm the subscription.
4) IMPORTANT: Replace the access policy of the topic with the below policy:
{
    "Version": "2008-10-17",
    "Id": "__default_policy_ID",
    "Statement": [
        {
            "Sid": "__default_statement_ID",
            "Effect": "Allow",
            "Principal": {
                "AWS": "*"
            },
            "Action": "SNS:Publish",
            "Resource": "sns-topic-arn",
            "Condition": {
                "ArnLike": {
                    "aws:SourceArn": "arn:aws:s3:*:*:s3-bucket-name"
                }
            }
        }
    ]
}
Basically you are giving permission for your s3 bucket to publish to the SNS topic.
Replace sns-topic-arn with the ARN of the topic you created above.
Replace s3-bucket-name with the name of the bucket for which you want to receive notifications.
5) Go to the S3 console. Click on your S3 bucket and open the Properties tab.
6) Under Advanced settings, click on the Events card.
7) Click Add Notification and enter the values: select the required s3 events to monitor and the SNS topic you created.
8) Click Save. Now you should start receiving notifications to your email.
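For completeness, a minimal boto3 sketch of the same steps; the topic name, email address and bucket name are placeholders to replace, and the email subscription still has to be confirmed from your inbox:
import json
import boto3

sns = boto3.client('sns')
s3 = boto3.client('s3')

bucket = 's3-bucket-name'  # placeholder, as in the policy above
topic_arn = sns.create_topic(Name='s3-upload-alerts')['TopicArn']  # step 1

# Step 2: email subscription; SNS then sends the confirmation request (step 3).
sns.subscribe(TopicArn=topic_arn, Protocol='email', Endpoint='you@example.com')

# Step 4: let the bucket publish to the topic.
policy = {
    "Version": "2008-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "*"},
        "Action": "SNS:Publish",
        "Resource": topic_arn,
        "Condition": {"ArnLike": {"aws:SourceArn": "arn:aws:s3:*:*:" + bucket}}
    }]
}
sns.set_topic_attributes(TopicArn=topic_arn,
                         AttributeName='Policy',
                         AttributeValue=json.dumps(policy))

# Steps 5-8: point the bucket's event notifications at the topic.
s3.put_bucket_notification_configuration(
    Bucket=bucket,
    NotificationConfiguration={
        'TopicConfigurations': [{
            'TopicArn': topic_arn,
            'Events': ['s3:ObjectCreated:*']
        }]
    }
)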

AWS Lambda Instance Start error

So I'm trying to create a lambda function that will trigger when an S3 PUT occurs. For now it seems to be executing the code, but I'm getting a problem.
import boto3

def lambda_handler(event, context):
    instances = ["i-0b5d926"]
    region = 'ap-south-1a'
    ec2 = boto3.client('ec2', region_name=region)
    ec2.instances.filter(InstanceIds=instances).start()
I'm getting this error:
{
    "stackTrace": [
        [
            "/var/task/index.py",
            9,
            "lambda_handler",
            "return ec2.instances.filter(InstanceIds=instaces).start()"
        ],
        [
            "/var/runtime/botocore/client.py",
            509,
            "__getattr__",
            "self.__class__.__name__, item)"
        ]
    ],
    "errorType": "AttributeError",
    "errorMessage": "'EC2' object has no attribute 'instances'"
}
Any help would be appreciated.
On a side note, the IAM role has full access (S3, EC2, Lambda); it will be configured for a specific purpose later.

OK, I understood the error: the instances collection only exists on the boto3 EC2 resource, not on the low-level client, hence 'EC2' object has no attribute 'instances'. I changed
ec2.instances.filter(InstanceIds=instances).start()
to
ec2.start_instances(InstanceIds=instances)
and it started.
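Putting that together, a minimal sketch of the corrected handler; the instance ID is the question's truncated placeholder, and note that 'ap-south-1a' from the question is an Availability Zone, so the client needs the region 'ap-south-1' instead:
import boto3

def lambda_handler(event, context):
    instances = ["i-0b5d926"]  # truncated instance ID from the question
    region = 'ap-south-1'      # 'ap-south-1a' is an Availability Zone, not a region
    ec2 = boto3.client('ec2', region_name=region)
    # The low-level client starts instances directly; the .instances
    # collection only exists on boto3.resource('ec2').
    return ec2.start_instances(InstanceIds=instances)
Equivalently, the original resource-style call works if you create a resource instead of a client: boto3.resource('ec2', region_name=region).instances.filter(InstanceIds=instances).start().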