We have a Lambda function that sends messages to an SQS queue.
We are using boto3.
We have built a new environment where the Lambda function runs in a VPC on a private subnet.
The VPC endpoint service name is com.amazonaws.eu-west-2.sqs.
Lambda code:
import boto3

sqs = boto3.resource('sqs')
# Get the queue
queue = sqs.get_queue_by_name(QueueName=QueueID)
This gives us the following error:
EndpointConnectionError: Could not connect to the endpoint URL: "https://eu-west-2.queue.amazonaws.com/"
We have tried the following change:
sqs = boto3.client('sqs')
# Get the queue
queue = sqs.get_queue_url(QueueName=QueueID, QueueOwnerAWSAccountId='xxxxxxxxxxxx')
We get the same error.
It appears to be a legacy endpoint issue, but we do not know how to make the Lambda function use the new endpoints.
Because you're using a VPC endpoint for SQS, you need to override the endpoint URL that boto3 uses by default. By default it resolves SQS to the legacy https://eu-west-2.queue.amazonaws.com hostname from your error, which the endpoint's private DNS does not cover. Note that endpoint_url takes the regional SQS URL, not the endpoint's service name.
Something like this:
sqs = boto3.resource('sqs', endpoint_url='https://sqs.eu-west-2.amazonaws.com')
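If private DNS is not enabled on the endpoint, you can point boto3 at the endpoint-specific DNS name instead. A minimal sketch, with a hypothetical endpoint ID:

import boto3

# Hypothetical endpoint-specific DNS name; copy the real one from the
# VPC endpoint's "DNS names" tab in the console.
vpce_url = 'https://vpce-0abc1234def567890-abcdefgh.sqs.eu-west-2.vpce.amazonaws.com'

sqs = boto3.resource('sqs', endpoint_url=vpce_url)
queue = sqs.get_queue_by_name(QueueName=QueueID)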
Related
I am working on a task which involves a Lambda function running inside a VPC.
This function is supposed to push messages to SQS, and the Lambda execution role has the AWSLambdaSQSQueueExecutionRole and AWSLambdaVPCAccessExecutionRole policies attached.
Lambda function:
import boto3

# Create SQS client
sqs = boto3.client('sqs')

queue_url = 'https://sqs.ap-east-1.amazonaws.com/073x08xx43xx37/xyz-queue'

# Send message to SQS queue
response = sqs.send_message(
    QueueUrl=queue_url,
    DelaySeconds=10,
    MessageAttributes={
        'Title': {
            'DataType': 'String',
            'StringValue': 'Tes1'
        },
        'Author': {
            'DataType': 'String',
            'StringValue': 'Test2'
        },
        'WeeksOn': {
            'DataType': 'Number',
            'StringValue': '1'
        }
    },
    MessageBody='Testing'
)

print(response['MessageId'])
On testing, the execution result is:
{
"errorMessage": "2020-07-24T12:12:15.924Z f8e794fc-59ba-43bd-8fee-57f417fa50c9 Task timed out after 3.00 seconds"
}
I increased the Timeout under Basic Settings to 5 seconds and then to 10 seconds, but the error kept coming.
If anyone has faced a similar issue in the past or has an idea how to get this resolved, please help me out.
Thank you in advance.
When an AWS Lambda function is configured to use an Amazon VPC, it connects to a nominated subnet of the VPC. This allows the Lambda function to communicate with other resources inside the VPC, but it cannot communicate with the Internet. That is the problem here: the Amazon SQS public endpoint lives on the Internet, and the function times out because it is unable to reach it.
Thus, you have 3 options:
Option 1: Do not connect to a VPC
If your Lambda function does not need to communicate with a resource in the VPC (such as the simple function you have provided above), simply do not connect it to the VPC. When a Lambda function is not connected to a VPC, it can communicate with the Internet and the Amazon SQS public endpoint.
Option 2: Use a VPC Endpoint
A VPC Endpoint provides a means of accessing an AWS service without going via the Internet. You would configure a VPC endpoint for Amazon SQS. Then, when the Lambda function wishes to connect with the SQS queue, it can access SQS via the endpoint rather than via the Internet. This is normally a good option if the Lambda function needs to communicate with other resources in the VPC.
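For illustration, a minimal boto3 sketch of creating such an endpoint (all resource IDs are placeholders; region assumed to be ap-east-1 to match the question):

import boto3

ec2 = boto3.client('ec2', region_name='ap-east-1')

# Interface endpoint for SQS; private DNS makes the standard
# sqs.<region>.amazonaws.com hostname resolve inside the VPC.
response = ec2.create_vpc_endpoint(
    VpcEndpointType='Interface',
    VpcId='vpc-0123456789abcdef0',               # placeholder
    ServiceName='com.amazonaws.ap-east-1.sqs',
    SubnetIds=['subnet-0123456789abcdef0'],      # placeholder
    SecurityGroupIds=['sg-0123456789abcdef0'],   # placeholder
    PrivateDnsEnabled=True
)
print(response['VpcEndpoint']['VpcEndpointId'])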
Option 3: Use a NAT Gateway
If the Lambda function is configured to use a private subnet, it will be able to access the Internet if a NAT Gateway has been provisioned in a public subnet and the Route Table for the private subnet points to the NAT Gateway. This involves extra expense and is only worthwhile if there is an additional need for a NAT Gateway.
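If you do go the NAT route, the moving parts look roughly like this in boto3 (a sketch; all IDs are placeholders, and an Elastic IP allocation is assumed to exist already):

import boto3

ec2 = boto3.client('ec2')

# The NAT gateway must live in a PUBLIC subnet.
nat = ec2.create_nat_gateway(
    SubnetId='subnet-0aaa1111bbbb2222c',        # public subnet, placeholder
    AllocationId='eipalloc-0123456789abcdef0'   # Elastic IP, placeholder
)
nat_id = nat['NatGateway']['NatGatewayId']

# Point the PRIVATE subnet's route table at the NAT gateway.
ec2.create_route(
    RouteTableId='rtb-0123456789abcdef0',  # private subnet's table, placeholder
    DestinationCidrBlock='0.0.0.0/0',
    NatGatewayId=nat_id
)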
If you're using the boto3 Python library in a Lambda function in a VPC, and it's failing to connect to an SQS queue through a VPC endpoint, you must set endpoint_url when creating the SQS client. boto3 issue #1900 describes the background behind this.
The solution looks like this (for an SQS VPC endpoint in us-east-1):
sqs_client = boto3.client(
    'sqs',
    endpoint_url='https://sqs.us-east-1.amazonaws.com')
Then call send_message or send_message_batch as normal.
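For example, a quick usage sketch (queue URL and message body are placeholders):

response = sqs_client.send_message(
    QueueUrl='https://sqs.us-east-1.amazonaws.com/123456789012/my-queue',  # placeholder
    MessageBody='Hello from a VPC-attached Lambda'
)
print(response['MessageId'])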
You need to place your Lambda function inside your VPC, then set up either a VPC endpoint for SQS or a NAT gateway. When you add your Lambda function to a subnet, make sure you add it ONLY to private subnets; otherwise nothing will work.
References:
https://docs.aws.amazon.com/lambda/latest/dg/vpc.html
https://aws.amazon.com/premiumsupport/knowledge-center/internet-access-lambda-function/
I am pretty convinced that you cannot call an SQS queue through an SQS VPC endpoint from a Lambda function inside a VPC. I'd consider it a bug, but maybe the Lambda team did this for a reason. In any case, you will get a message timeout. I cooked up a simple test Lambda:
import json
import boto3
import socket

def lambda_handler(event, context):
    print('lambda-test SQS...')
    sqsDomain = 'sqs.us-west-2.amazonaws.com'
    addr1 = socket.gethostbyname(sqsDomain)
    print('%s=%s' % (sqsDomain, addr1))
    print('Creating sqs client...')
    sqs = boto3.client('sqs')
    print('Sending Test Message...')
    response = sqs.send_message(
        QueueUrl='https://sqs.us-west-2.amazonaws.com/1234567890/testq.fifo',
        MessageBody='Test SQS Lambda!',
        MessageGroupId='test')
    print('SQS send response: %s' % response)
    return {
        'statusCode': 200,
        'body': json.dumps(response)
    }
I created a VPC, subnet, etc. per Configuring a Lambda function to access resources in a VPC. The EC2 instance in this example has no problem invoking SQS through the private endpoint from the CLI per this tutorial.
If I drop my simple Lambda above into the same VPC and subnet, with SQS publishing permissions etc., and invoke the test function, it will properly resolve the IP address of the SQS endpoint within the subnet, but the call will time out (make sure your Lambda timeout is more than 60 seconds to let boto fail). Enabling boto debug logging further confirms that the IP is resolved correctly and that the HTTP request to SQS times out.
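For reference, a one-liner turns on that debug logging in a test function (a minimal sketch using boto3's built-in logger):

import logging
import boto3

# Stream all boto3/botocore debug output (endpoint resolution,
# HTTP retries, etc.) to the Lambda logs.
boto3.set_stream_logger('', logging.DEBUG)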
I didn't try this with a non-FIFO queue but as the HTTP call is failing on connection request this shouldn't matter. It's got to be a routing issue from the Lambda as the EC2 in the same subnet works.
I modified my simple Lambda, added an SNS endpoint, and did the same test, which worked. The issue appears to be specific to SQS, best I can tell.
import json
import boto3
import socket

def testSqs():
    print('lambda-test SQS...')
    sqsDomain = 'sqs.us-west-2.amazonaws.com'
    addr1 = socket.gethostbyname(sqsDomain)
    print('%s=%s' % (sqsDomain, addr1))
    print('Creating sqs client...')
    sqs = boto3.client('sqs')
    print('Sending Test Message...')
    response = sqs.send_message(
        QueueUrl='https://sqs.us-west-2.amazonaws.com/1234567890/testq.fifo',
        MessageBody='Test SQS Lambda!',
        MessageGroupId='test')
    print('SQS send response: %s' % response)
    return {
        'statusCode': 200,
        'body': json.dumps(response)
    }

def testSns():
    print('lambda-test SNS...')
    print('Creating sns client...')
    sns = boto3.client('sns')
    print('Sending Test Message...')
    response = sns.publish(
        TopicArn='arn:aws:sns:us-west-2:1234567890:lambda-test',
        Message='Test SQS Lambda!'
    )
    print('SNS send response: %s' % response)
    return {
        'statusCode': 200,
        'body': json.dumps(response)
    }

def lambda_handler(event, context):
    #return testSqs()
    return testSns()
I think your only options are NAT (per John above), bouncing your calls off a local EC2 instance (NAT will be simpler, cheaper, and more reliable), or using a Lambda proxy outside the VPC, which someone else suggested in a similar post. You could also subscribe an SQS queue to an SNS topic (I prototyped this and it works; see the sketch below) and route it out that way too, but that just seems silly unless you absolutely have to have SQS for some obscure reason.
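A minimal sketch of that SNS-to-SQS prototype (both ARNs are placeholders; the queue also needs an access policy allowing the topic to send to it):

import boto3

sns = boto3.client('sns')

# Subscribe the queue to the topic; messages published to the topic
# are then delivered into the queue.
sns.subscribe(
    TopicArn='arn:aws:sns:us-west-2:123456789012:lambda-test',  # placeholder
    Protocol='sqs',
    Endpoint='arn:aws:sqs:us-west-2:123456789012:my-queue'      # placeholder
)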
I switched to SNS. I was just hoping to get some more experience with SQS. Hopefully somebody can prove me wrong, but I call it a bug.
We have an AWS CloudFormation template through which we are creating an Amazon MSK (Kafka) cluster, which is working fine.
Now we have multiple applications in our product stack which consume the broker endpoints created by Amazon MSK. To automate the product deployment, we decided to create a Route53 record set for the MSK broker endpoints. We are having a hard time finding how we can get the broker endpoints of the MSK cluster as Outputs in AWS CloudFormation templates.
Looking forward to suggestions/guidance on this.
Following on @joinEffort's answer, this is how I did it using a custom resource, as the CFN resource for an MSK::Cluster does not expose the broker URLs:
(Option 2 uses boto3 and calls the AWS API directly.)
The description of the classes and methods to use from CDK custom resource code can be found here:
https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/Kafka.html#getBootstrapBrokers-property
Option 1: Using a CDK custom resource:
# Assumes: from aws_cdk import custom_resources
def get_bootstrap_servers(self):
    create_params = {
        "ClusterArn": self._platform_msk_cluster_arn
    }

    get_bootstrap_brokers = custom_resources.AwsSdkCall(
        service='Kafka',
        action='getBootstrapBrokers',
        region='ap-southeast-2',
        physical_resource_id=custom_resources.PhysicalResourceId.of(f'connector-{self._environment_name}'),
        parameters=create_params
    )

    create_update_custom_plugin = custom_resources.AwsCustomResource(
        self,
        'getBootstrapBrokers',
        on_create=get_bootstrap_brokers,
        on_update=get_bootstrap_brokers,
        policy=custom_resources.AwsCustomResourcePolicy.from_sdk_calls(
            resources=custom_resources.AwsCustomResourcePolicy.ANY_RESOURCE)
    )

    return create_update_custom_plugin.get_response_field('BootstrapBrokerString')
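To surface that value as a stack output, which is what the question asked for, a short follow-up sketch (assuming CDK v2, where CfnOutput is imported from aws_cdk; in CDK v1 it lives in aws_cdk.core):

from aws_cdk import CfnOutput

# Inside the same stack/construct:
CfnOutput(self, 'MskBootstrapBrokers',
          value=self.get_bootstrap_servers())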
Option 2: Using boto3:
import boto3

client = boto3.client('kafka', region_name='ap-southeast-2')
response = client.get_bootstrap_brokers(
    ClusterArn='xxx')

# The response is already a dict; the broker URLs are in
# 'BootstrapBrokerString' (or 'BootstrapBrokerStringTls' for TLS listeners).
brokers = response['BootstrapBrokerString']
Ref: https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/kafka.html#Kafka.Client.get_bootstrap_brokers
You should be able to get it from the command below; see the get-bootstrap-brokers CLI reference for more.
aws kafka get-bootstrap-brokers --region us-east-1 --cluster-arn ClusterArn
I'm trying to notify an SNS topic from a CloudWatch alarm that's in a different region. The reason is that I want SMS alerting, which isn't available in the region where my services are. If I enter the ARN of the subscription and save the changes in the console, I get "There was an error saving the alarm. Please try again." Trying again does not help. Using a topic in the local region does work, but that's not what I need.
Is there a way to notify a topic in a different region? If not, is there another easy way I can achieve my goal?
I didn't find any docs that explicitly say this can't be done, but I tried to set an SNS topic from us-east-1 as an action of an alarm in eu-west-1 using the CLI, and I got this:
An error occurred (ValidationError) when calling the PutMetricAlarm operation: Invalid region us-east-1 specified. Only eu-west-1 is supported.
So I'll assume it's not supported.
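For anyone who wants to reproduce it, a rough boto3 equivalent of that CLI test (alarm parameters and topic ARN are placeholders):

import boto3

cloudwatch = boto3.client('cloudwatch', region_name='eu-west-1')

# Fails with ValidationError: the alarm action must be in eu-west-1.
cloudwatch.put_metric_alarm(
    AlarmName='cross-region-sns-test',   # placeholder
    Namespace='AWS/EC2',
    MetricName='CPUUtilization',
    Statistic='Average',
    Period=300,
    EvaluationPeriods=1,
    Threshold=80.0,
    ComparisonOperator='GreaterThanThreshold',
    AlarmActions=['arn:aws:sns:us-east-1:123456789012:sms-topic']  # placeholder
)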
To get the functionality you need, you can use AWS Lambda. Let's say your service is in a region where SMS is not supported; I'll use eu-central-1 as an example.
Setup would go like this:
[us-east-1] Create your SNS topic that can send SMS messages, in the region where SMS is supported.
[eu-central-1] Create a Lambda function that sends messages to the SNS topic from step 1 in the region where your service is.
[eu-central-1] Create an SNS topic in the region where your service is.
For that SNS topic, configure a subscription with the AWS Lambda protocol and point it at the Lambda function from step 2.
[eu-central-1] Create your alarm in the region where your service is and put the SNS topic from step 3 as an action.
To add to @Tartaglia's answer, here's the source of such a Lambda function using Python 3, cobbled together from various sources because I don't have time to do it properly:
import json
import logging
import boto3

logger = logging.getLogger()
logger.setLevel(logging.INFO)

session = boto3.Session(region_name='eu-west-1')  # EU (Ireland)
sns_client = session.client('sns')

def lambda_handler(event, context):
    logger.info('Received event: %s', event)
    for record in event['Records']:
        sns_message = record['Sns']
        response = sns_client.publish(
            TopicArn='YOUR TOPIC ARN HERE',
            Subject=sns_message.get('Subject', None),
            Message=sns_message.get('Message', None))
        logger.info('Publish response: %s', response)
    return 'OK'
How can we access an SQS queue using "AmazonSQSClient" without using an AccessKey and SecretKey? Are there any options or code samples that use a Role instead of an AccessKey and SecretKey? We are trying to access an SQS queue in our AWS environment where the Lambda has a Role assigned that has access to SQS, and we do not allow the use of an AccessKey and SecretKey. How can we achieve this? Any ideas?
I am using a Lambda function and the AWSSDK.SQS NuGet package to work with AWS SQS queues for sending, reading and deleting messages.
If your Lambda function runs under an IAM Role which has permission to access SQS (read, write, delete = full access), then you can access the SQS queue without providing an AccessKey and SecretKey. I have done this in my project recently.
https://ramanisandeep.wordpress.com/2018/03/10/amazon-simple-queue-service-sqs/
Note: You need to use ProxyHost and ProxyPort if you are running your Lambda function in a restricted environment, e.g.:
_sqsConfig = new AmazonSQSConfig
{
    ProxyHost = proxyHost,
    ProxyPort = proxyPort,
    // Note: ServiceURL and RegionEndpoint are mutually exclusive in the
    // .NET SDK; whichever is set last resets the other.
    ServiceURL = queueUrl,
    RegionEndpoint = RegionEndpoint.USEast1
};

// No credentials supplied: the SDK falls back to the Lambda execution
// role's credentials automatically.
var sqsClient = new AmazonSQSClient(_sqsConfig);
I'm writing to SQS using boto as follows.
I got the Message object response from print:
<boto.sqs.message.Message object at 0x102dd4790>
But I'm not seeing that message in the AWS SQS console.
SQS is region-specific, so initialising the connection with the correct region fixes this:
import boto.sqs

sqs = boto.sqs.connect_to_region(
    "ap-southeast-2",
    aws_access_key_id=AWS_ACCESS_KEY_ID,
    aws_secret_access_key=AWS_SECRET_ACCESS_KEY)
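A minimal follow-up sketch for writing a message with the legacy boto library (queue name is a placeholder; note that boto has long been superseded by boto3):

from boto.sqs.message import Message

# Look up the queue in the connected region and write a test message.
queue = sqs.get_queue('my-queue')  # placeholder name
msg = Message()
msg.set_body('Testing')
queue.write(msg)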