Unable to connect to AWS DAX - amazon-web-services

I am getting this error when I execute my Lambda function:
DaxClientError('Failed to configure cluster endpoints from {}'.format(seeds), DaxErrorCode.NoRoute)
I am trying to connect to my DAX cluster from AWS Lambda (written in Python).
I installed amazon-dax-client into a folder, placed my Lambda file there, built the package, and uploaded it as a zip file. When I test the Lambda it throws the above error.

Your Lambda function needs to be in the same VPC as the DAX cluster. Otherwise, it won't be able to connect.

There is a blog post on using Amazon DAX from AWS Lambda. The sample code is for Node.js, but the Lambda/VPC configuration applies no matter what language is used.
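If you prefer to script that configuration, a minimal boto3 sketch of attaching an existing function to the cluster's VPC might look like this (the function name, subnet ID, and security group ID are placeholders):

import boto3

lambda_client = boto3.client('lambda')

# Attach the function to the same VPC as the DAX cluster.
# The function name, subnet ID, and security group ID are placeholders.
lambda_client.update_function_configuration(
    FunctionName='my-dax-function',
    VpcConfig={
        'SubnetIds': ['subnet-0123456789abcdef0'],
        'SecurityGroupIds': ['sg-0123456789abcdef0'],
    },
)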

Don't forget to add an inbound rule to your security group that allows TCP port 8111, the port the DAX cluster listens on.
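A hedged sketch of adding that rule with boto3 (both security group IDs are placeholders; the source is typically the security group attached to the Lambda function):

import boto3

ec2 = boto3.client('ec2')

# Allow the Lambda function's security group to reach the DAX cluster on TCP 8111.
# Both group IDs are placeholders.
ec2.authorize_security_group_ingress(
    GroupId='sg-dax-cluster-placeholder',
    IpPermissions=[{
        'IpProtocol': 'tcp',
        'FromPort': 8111,
        'ToPort': 8111,
        'UserIdGroupPairs': [{'GroupId': 'sg-lambda-placeholder'}],
    }],
)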

Related

Unable to connect to AWS/RDS from Lambda

I have a Node.js Express app that uses Sequelize to connect to the database. I want to deploy my app on Lambda (with API Gateway) and use an RDS Postgres database (serverless).
I created an RDS instance and a serverless setup. From an EC2 instance, I am able to connect to both the RDS instance and the serverless DB without any issues.
However, when I deploy the same code on Lambda, I am unable to connect to either DB instance. In fact, I do not see any error messages anywhere.
const Sequelize = require('sequelize');

const sequelize = new Sequelize(process.env.POSTGRES_DBNAME, process.env.POSTGRES_USERNAME, process.env.POSTGRES_PASSWORD, {
  host: process.env.POSTGRES_HOST,
  dialect: 'postgres',
  logging: false,
  operatorsAliases: false
});

// Test the connection once at startup
(async function() {
  try {
    console.log('Connecting to: ', process.env.POSTGRES_DBNAME, process.env.POSTGRES_USERNAME, process.env.POSTGRES_PASSWORD, process.env.POSTGRES_HOST);
    await sequelize.authenticate();
    console.log('Connection has been established successfully.');
  } catch (error) {
    console.error('Unable to connect to the database:', error);
  }
})();
I even tried using a MySQL instance with RDS Proxy, but the result is the same: the test connection code doesn't execute, and neither success nor error messages appear in the logs. I wanted to understand if I am missing something. The DB has been configured to be accessible from outside.
My guess is that you have not configured the Lambda IAM permissions correctly. In order for Lambda to be able to access RDS inside a VPC, you can use the AWSLambdaVPCAccessExecutionRole managed policy; for CloudWatch Logs to work, you can add the AWSLambdaBasicExecutionRole policy to your Lambda function's execution role.
The AWS Lambda developer guide has a tutorial that gives an example of how this can be done.
For more details, please read the Configuring a Lambda function to access resources in a VPC chapter in the developer guide.
To connect to an Amazon RDS instance from a Lambda function, refer to this Amazon document: How do I configure a Lambda function to connect to an RDS instance?.
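If you manage the execution role from code, a rough boto3 sketch of attaching those two managed policies could look like the following (the role name is a placeholder):

import boto3

iam = boto3.client('iam')

# 'my-lambda-execution-role' is a placeholder for your function's execution role.
for policy_arn in (
    'arn:aws:iam::aws:policy/service-role/AWSLambdaVPCAccessExecutionRole',
    'arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole',
):
    iam.attach_role_policy(RoleName='my-lambda-execution-role', PolicyArn=policy_arn)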
The problem turned out to be with my express package. My AWS configuration was correct: replacing the Lambda entry code with a plain DB connection that printed a list of values worked, but plugging it back into the Express code did not. I am not sure what the exact issue was; I found that upgrading the express version fixed my problem.
Thank you everyone for taking the time out to answer my question.
Always watch out for the VPC security group your DB runs in; if your RDS DB is not public, you have to put your Lambda function in the same security group and the same subnets.
You can find more details here: Connect to a MySQL - AWS RDS from an AWS Lambda hosted Express JS App

Lambda ==> RDS ==> QuickSight

What I'm trying to do
I am working on a lambda function which will simply register some metadata about files which are uploaded onto an s3 bucket. This is not about actually processing the data in the files yet. To start with, I just want to register the fact that certain files have been uploaded or not. Then I want to connect that metadata to QuickSight just so that we can have a nice visual about which files have been uploaded.
What I've done so far
This part is fairly easy:
Some simple Python code with the pymysql module
Chalice to manage the process of creating and updating the lambda function
I created the database
Where I'm stuck
QuickSight sits outside the VPC, so I had to create the RDS (MySQL) instance in the DMZ of our VPC.
I have configured the security group so that the DB is accessible both from QuickSight and from my own laptop.
But the lambda function can't connect.
I configured the right policy for the role, so that the lambda can connect with IAM
I tested that policy with the simulator
But of course the Lambda function is going to have some kind of dynamic IP, and that needs to be in the security group.
Any ideas?
Am I even thinking about this right?
Two things.
You shouldn't have to put your RDS in a DMZ. See this article about granting QuickSight access to your RDS: https://docs.aws.amazon.com/quicksight/latest/user/enabling-access-rds.html
In order for a Lambda function to access something in a VPC (like an RDS instance), the Lambda must have a VPC configuration. https://docs.aws.amazon.com/lambda/latest/dg/configuration-vpc.html
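Once the function is attached to the VPC, a minimal pymysql sketch inside the Lambda handler might look like the following (host, credentials, and database name are placeholder environment variables):

import os
import pymysql

def lambda_handler(event, context):
    # Host, credentials, and database name are placeholders read from environment variables.
    connection = pymysql.connect(
        host=os.environ['DB_HOST'],
        user=os.environ['DB_USER'],
        password=os.environ['DB_PASSWORD'],
        database=os.environ['DB_NAME'],
        connect_timeout=5,  # fail fast if the VPC/security group path is wrong
    )
    try:
        with connection.cursor() as cursor:
            cursor.execute('SELECT 1')
            return {'db_reachable': cursor.fetchone()[0] == 1}
    finally:
        connection.close()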

AWS - What is the simplest way to get a dump of all allowed inbound IP addresses?

Pretty new to the AWS APIs/Lambda so apologies if I'm missing something simple. I just want to get an automated dump of the inbound IP addresses under each of our security groups on a weekly interval. Is this something I can setup under lambda or do I need to do it through the API or CLI? I've looked at the DescribeSecurityGroup functions under https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeSecurityGroups.html and https://docs.aws.amazon.com/cli/latest/reference/ec2/describe-security-groups.html, but am wondering if I'm overcomplicating. Thanks in advance.
You would need to write an AWS Lambda function that queries the security groups for CIDR rules, using the AWS SDK for whatever language you write the function in. You could write the output to a file in the /tmp folder of the Lambda environment and copy that file to S3 using the SDK, then schedule the function to run weekly.
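A rough boto3 sketch of that idea, assuming a hypothetical bucket name and writing one JSON document per run (scheduling the function weekly would be done separately, e.g. with an EventBridge rule):

import json
import boto3

# The bucket name is a placeholder; the Lambda role needs ec2:DescribeSecurityGroups
# and s3:PutObject permissions.
BUCKET = 'my-sg-audit-bucket'

def lambda_handler(event, context):
    ec2 = boto3.client('ec2')
    s3 = boto3.client('s3')

    dump = {}
    paginator = ec2.get_paginator('describe_security_groups')
    for page in paginator.paginate():
        for group in page['SecurityGroups']:
            inbound = []
            for permission in group['IpPermissions']:
                inbound.extend(r['CidrIp'] for r in permission.get('IpRanges', []))
                inbound.extend(r['CidrIpv6'] for r in permission.get('Ipv6Ranges', []))
            dump[group['GroupId']] = inbound

    # Write the inbound CIDR ranges per security group to S3 as a single JSON file.
    s3.put_object(
        Bucket=BUCKET,
        Key='security-group-inbound-cidrs.json',
        Body=json.dumps(dump, indent=2),
    )
    return {'groups': len(dump)}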
If you already have an EC2 instance running on AWS then the "simplest" way would probably be to add a cron job to that instance that uses the AWS CLI tool to query for the CIDR rules.

AWS Glue ETL job from AWS Redshift to S3 fails

I am trying out the AWS Glue service to ETL some data from Redshift to S3. The crawler runs successfully and creates the meta table in the data catalog; however, when I run the ETL job (generated by AWS) it fails after around 20 minutes saying "Resource unavailable".
I cannot see AWS glue logs or error logs created in Cloudwatch. When I try to view them it says "Log stream not found. The log stream jr_xxxxxxxxxx could not be found. Check if it was correctly created and retry."
I would appreciate it if you could provide any guidance to resolve this issue.
So basically, the job you add to Glue will only run if there's not too much traffic in the region your Glue job is in. If there are no resources available, you need to either manually re-add the job, or you can bind yourself to events from CloudWatch via SNS.
Also, there are parameters you can pass to the job, such as MaxRetries and Timeout.
If you get a "Resource unavailable" error, it won't trigger a retry, because the job did not fail; it just never started. But if you set the timeout to, say, 60 minutes, it will trigger an error after that time, decrement your retry pool, and re-launch the job.
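A hedged boto3 sketch of setting those two parameters on an existing job (the job name, role ARN, and script location are placeholders; note that update_job overwrites the previous job definition, so Role and Command have to be supplied as well):

import boto3

glue = boto3.client('glue')

# The job name, role ARN, and script location are placeholders.
# update_job replaces the whole job definition, so Role and Command are required here.
glue.update_job(
    JobName='my-redshift-to-s3-job',
    JobUpdate={
        'Role': 'arn:aws:iam::123456789012:role/my-glue-role',
        'Command': {'Name': 'glueetl', 'ScriptLocation': 's3://my-bucket/scripts/job.py'},
        'MaxRetries': 1,
        'Timeout': 60,  # minutes
    },
)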
The closest thing I see to Glue documentation on this is here:
If you encounter errors in AWS Glue, use the following solutions to help you find the source of the problems and fix them. Note: the AWS Glue GitHub repository contains additional troubleshooting guidance in AWS Glue Frequently Asked Questions.
Error: Resource Unavailable. If AWS Glue returns a resource unavailable message, you can view error messages or logs to help you learn more about the issue. The following tasks describe general methods for troubleshooting.
• A custom DNS configuration without reverse lookup can cause AWS Glue to fail. Check your DNS configuration. If you are using Amazon Route 53 or Microsoft Active Directory, make sure that there are forward and reverse lookups. For more information, see Setting Up DNS in Your VPC.
• For any connections and development endpoints that you use, check that your cluster has not run out of elastic network interfaces.
I have recently struggled with "Resource unavailable" thrown by a Glue job.
Also, I was not able to make a direct connection in Glue using RDS; it said "no suitable security group found".
I faced this issue while trying to connect with AWS RDS and Redshift.
The problem was with the security group that Redshift was using. You need to place a self-referencing inbound rule in the security group.
For those who don't know what a self-referencing inbound rule is, follow these steps:
1) Go to the security group you are using (VPC -> Security Groups)
2) In the inbound rules, select Edit Inbound Rules
3) Add a rule:
a) Type - All Traffic
b) Protocol - All
c) Port Range - All
d) Source - Custom; start typing the ID of this same security group and select it
e) Save it
That's it!
If you were missing this rule in your security group's inbound rules, try creating the connection again; you should be able to create it this time.
The job should also work this time.
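For reference, a sketch of what those console steps do, expressed with boto3 (the group ID is a placeholder; the rule uses the group itself as its own source):

import boto3

ec2 = boto3.client('ec2')

# Placeholder for the security group used by Redshift/RDS.
GROUP_ID = 'sg-0123456789abcdef0'

# Self-referencing rule: allow all traffic whose source is this same security group.
ec2.authorize_security_group_ingress(
    GroupId=GROUP_ID,
    IpPermissions=[{
        'IpProtocol': '-1',  # all protocols and all ports
        'UserIdGroupPairs': [{'GroupId': GROUP_ID}],
    }],
)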

Not able to launch DynamoDB Storage Backend for Titan with Gremlin Server on Amazon EC2 using an AWS CloudFormation template

I am trying to configure the DynamoDB Storage Backend for Titan with Gremlin Server on Amazon EC2 using an AWS CloudFormation template, following the link here.
Under Prerequisites it is mentioned that we need the gremlin-server and dynamodb.properties files.
Q1. I got the files from the GitHub link. Are these the correct files that I can use as-is, or do I need to modify their contents?
Q2. Using these files as-is, I am able to create the CloudFormation stack, which creates an EC2 instance, but I am not able to SSH into that EC2 instance. It also does not create a Gremlin server or any tables in DynamoDB, although according to the documentation it should. I am not sure what I am doing wrong.
Any help is highly appreciated.
Please try the new CloudFormation template available on the JanusGraph branch. I tested it yesterday and it appears to work.