When TLS is disabled, I can connect successfully through my lambda function using the same code as shown here - https://docs.aws.amazon.com/documentdb/latest/developerguide/connect.html#w139aac29c11c13b5b7
However, when I enable TLS and use the TLS-enabled code sample from the above link, my lambda function times out. I've downloaded the rds-combined-ca-bundle.pem file with wget, and I am deploying the .pem file along with my code to AWS Lambda.
This is the code where my execution stops and times out:
caFilePath := "rds-combined-ca-bundle.pem"
var connectionStringTemplate = "mongodb://%s:%s@%s:27017/dbname?ssl=true&sslcertificateauthorityfile=%s"
var connectionURI = fmt.Sprintf(connectionStringTemplate, secret["username"], secret["password"], secret["host"], caFilePath)
fmt.Println("Connection String", connectionURI)
client, err := mongo.NewClient(options.Client().ApplyURI(connectionURI))
if err != nil {
    log.Fatalf("Failed to create client: %v", err)
}
I don't see any errors in the CloudWatch logs after the "Connection String" print.
I suspect it's an issue with your VPC design.
See Connecting to an Amazon DocumentDB Cluster from Outside an Amazon VPC (check the last paragraph):
https://docs.aws.amazon.com/documentdb/latest/developerguide/connect-from-outside-a-vpc.html
The link below also gives detailed instructions:
https://blog.webiny.com/connecting-to-aws-documentdb-from-a-lambda-function-2b666c9e4402
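If your client really is outside the cluster's VPC, the first doc above boils down to tunnelling through an EC2 instance inside that VPC. A rough sketch using the third-party sshtunnel package; the jump host, key path, and cluster endpoint below are all placeholders:

from sshtunnel import SSHTunnelForwarder  # pip install sshtunnel

# Placeholders: an EC2 jump host inside the cluster's VPC,
# and the DocumentDB cluster endpoint it can reach.
with SSHTunnelForwarder(
    ("ec2-x-x-x-x.compute.amazonaws.com", 22),
    ssh_username="ec2-user",
    ssh_pkey="/path/to/key.pem",
    remote_bind_address=("mycluster.node.us-east-1.docdb.amazonaws.com", 27017),
    local_bind_address=("127.0.0.1", 27017),
):
    # While the tunnel is up, point your client at localhost:27017.
    print("Tunnel open on 127.0.0.1:27017")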
Can you try creating a Lambda test function using Python and see if you're having the same issue?
import pymongo
import sys
##Create a MongoDB client, open a connection to Amazon DocumentDB as a replica set and specify the read preference as secondary preferred
client = pymongo.MongoClient('mongodb://<dbusername>:<dbpassword>@mycluster.node.us-east-1.docdb.amazonaws.com:27017/?ssl=true&ssl_ca_certs=rds-combined-ca-bundle.pem&replicaSet=rs0&readPreference=secondaryPreferred')
##Specify the database to be used
db = client.test
##Specify the collection to be used
col = db.myTestCollection
##Insert a single document
col.insert_one({'hello':'Amazon DocumentDB'})
##Find the document that was previously written
x = col.find_one({'hello':'Amazon DocumentDB'})
##Print the result to the screen
print(x)
##Close the connection
client.close()
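If the Python test also times out, it is worth separating networking from TLS with a bare-bones handshake check from inside the Lambda. A minimal sketch using only the standard library (the endpoint is a placeholder, and the path assumes the CA bundle is deployed alongside your code):

import socket
import ssl

HOST = "mycluster.node.us-east-1.docdb.amazonaws.com"  # placeholder endpoint
CA_BUNDLE = "rds-combined-ca-bundle.pem"

context = ssl.create_default_context(cafile=CA_BUNDLE)
try:
    # A timeout here (rather than a certificate error) points at
    # VPC networking: security groups, subnets, or route tables.
    with socket.create_connection((HOST, 27017), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=HOST) as tls:
            print("TLS handshake OK:", tls.version())
except Exception as e:
    print("Connection failed:", repr(e))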
Intermittently getting the following error when connecting to AWS Keyspaces using a Lambda layer
All host(s) tried for query failed. First host tried, 3.248.244.53:9142: Host considered as DOWN. See innerErrors.
I am trying to query a table in a keyspace using a Node.js Lambda function as follows:
import cassandra from 'cassandra-driver';
import fs from 'fs';
export default class AmazonKeyspace {
  tpmsClient = null;

  constructor () {
    let auth = new cassandra.auth.PlainTextAuthProvider('cass-user-at-xxxxxxxxxx', 'zzzzzzzzz');
    let sslOptions1 = {
      ca: [fs.readFileSync('/opt/utils/AmazonRootCA1.pem', 'utf-8')],
      host: 'cassandra.eu-west-1.amazonaws.com',
      rejectUnauthorized: true
    };
    this.tpmsClient = new cassandra.Client({
      contactPoints: ['cassandra.eu-west-1.amazonaws.com'],
      localDataCenter: 'eu-west-1',
      authProvider: auth,
      sslOptions: sslOptions1,
      keyspace: 'tpms',
      protocolOptions: { port: 9142 }
    });
  }

  getOrganisation = async (orgKey) => {
    const SQL = 'SELECT * FROM organisation WHERE organisation_id = ?;';
    return new Promise((resolve, reject) => {
      this.tpmsClient.execute(SQL, [orgKey], { prepare: true }, (err, result) => {
        if (!err?.message) resolve(result.rows);
        else reject(err.message);
      });
    });
  };
}
I am basically following this recommended AWS documentation.
https://docs.aws.amazon.com/keyspaces/latest/devguide/using_nodejs_driver.html
It seems that around 10-20% of the time the lambda function (cassandra driver) cannot connect to the endpoint.
I am pretty familiar with Cassandra (I already use a 6 node cluster that I manage) and don't have any issues with that.
Could this be a timeout or do I need more contact points?
I followed the recommended guides and checked the AWS console for errors, but none are shown.
UPDATE:
I am occasionally (about 1 in 50 calls when I invoke the function in parallel with 5 concurrent calls) getting the error below:
"All host(s) tried for query failed. First host tried,
3.248.244.5:9142: DriverError: Socket was closed at Connection.clearAndInvokePending
(/opt/node_modules/cassandra-driver/lib/connection.js:265:15) at
Connection.close
(/opt/node_modules/cassandra-driver/lib/connection.js:618:8) at
TLSSocket.
(/opt/node_modules/cassandra-driver/lib/connection.js:93:10) at
TLSSocket.emit (node:events:525:35)\n at node:net:313:12\n at
TCP.done (node:_tls_wrap:587:7) { info: 'Cassandra Driver Error',
isSocketError: true, coordinator: '3.248.244.5:9142'}
This exception may be caused by throttling on the Amazon Keyspaces side, resulting in the driver error you are seeing sporadically.
I would suggest taking a look at this repo, which should help you put measures in place to either prevent this issue or at least reveal its true cause.
For some of the errors you see in the logs, you will need to investigate Amazon CloudWatch metrics to see whether you have throttling or system errors. I've built this AWS CloudFormation template to deploy a CloudWatch dashboard with all the appropriate metrics. This will provide better observability for your application.
A system error indicates an event that must be resolved by AWS and is often part of normal operations; activities such as timeouts, server faults, or scaling can result in server errors. A user error indicates an event that can usually be resolved by the user, such as an invalid query or exceeding a capacity quota. Amazon Keyspaces passes a system error back as a Cassandra ServerError. In most cases this is a transient error, in which case you can retry your request until it succeeds. With the Cassandra drivers' default retry policy, customers can also experience NoHostAvailableException or AllNodesFailedException, or messages like yours, "All host(s) tried for query failed". This is a client-side exception that is thrown once all hosts in the load-balancing policy's query plan have been tried.
Take a look at this retry policy for Node.js, which should help resolve your "All hosts failed" exception or pass back the original exception.
The retry policies in the Cassandra drivers are fairly crude and cannot do more sophisticated things like circuit-breaker patterns. You may eventually want to use a "fail fast" retry policy in the driver and handle the exceptions in your application code, as sketched below.
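To illustrate the shape of that application-side handling, here is a minimal backoff-and-retry sketch, written in Python for brevity (the helper and its parameters are illustrative, not part of any driver; the same structure carries over to Node.js):

import random
import time

def execute_with_backoff(execute, max_attempts=5, base_delay=0.1):
    """Retry a query with exponential backoff and jitter.

    `execute` is any zero-argument callable that runs the query and
    raises on failure (e.g. a wrapped driver call).
    """
    for attempt in range(max_attempts):
        try:
            return execute()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the original error
            # Sleep base_delay * 2^attempt, with jitter so concurrent
            # Lambda invocations do not retry in lockstep.
            time.sleep(base_delay * (2 ** attempt) * (1 + random.random()))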
I have followed the AWS tutorial step by step. https://aws.amazon.com/premiumsupport/knowledge-center/iot-core-publish-mqtt-messages-python/
I have created the open-ended policy with the * wildcard, registered a thing and attached it to the policy, and generated, downloaded, and activated the certificates. I have tried to connect and publish to a topic using both the AWS IoT SDK for Python v2 and the original SDK, but neither works. The code I'm using is straight from AWS's demo example connection code, but it just won't connect.
While using the AWS IoT SDK for Python v2 I get this error message:
RuntimeError: 1038 (AWS_IO_FILE_VALIDATION_FAILURE): A file was read and the input did not match the expected value
While using the original SDK I get this error message:
TimeoutError: [Errno 60] Operation timed out
The Python code I'm using:
# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
# SPDX-License-Identifier: MIT-0
import time as t
import json
import AWSIoTPythonSDK.MQTTLib as AWSIoTPyMQTT
# Define ENDPOINT, CLIENT_ID, PATH_TO_CERT, PATH_TO_KEY, PATH_TO_ROOT, MESSAGE, TOPIC, and RANGE
ENDPOINT = "XXXXX-ats.iot.ap-southeast-2.amazonaws.com"
CLIENT_ID = "testDevice"
PATH_TO_CERT = "certs/XXXX-certificate.pem.crt"
PATH_TO_KEY = "certs/XXXX-private.pem.key"
PATH_TO_ROOT = "certs/root.pem"
MESSAGE = "Hello World"
TOPIC = "test/testing"
RANGE = 20
myAWSIoTMQTTClient = AWSIoTPyMQTT.AWSIoTMQTTClient(CLIENT_ID)
myAWSIoTMQTTClient.configureEndpoint(ENDPOINT, 8883)
myAWSIoTMQTTClient.configureCredentials(PATH_TO_ROOT, PATH_TO_KEY, PATH_TO_CERT)
myAWSIoTMQTTClient.connect()
print('Begin Publish')
for i in range(RANGE):
    data = "{} [{}]".format(MESSAGE, i + 1)
    message = {"message": data}
    myAWSIoTMQTTClient.publish(TOPIC, json.dumps(message), 1)
    print("Published: '" + json.dumps(message) + "' to the topic: " + "'test/testing'")
    t.sleep(0.1)
print('Publish End')
myAWSIoTMQTTClient.disconnect()
(I censored the endpoint and the certificate ID)
(I'm using a MacBook Air on a public school network)
I went home and tested it and it works perfectly. If you have this same problem, try troubleshooting your network. I think my school blocks MQTT or something.
MQTT uses the specific port 8883, which you configure in myAWSIoTMQTTClient.configureEndpoint(ENDPOINT, 8883).
In one of my AWS IoT courses I learned that some network administrators block all ports that are not commonly used in order to avoid unwanted traffic, and MQTT is specific to the IoT industry. This could be the reason why it did not work on your school network but did work at home.
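One quick way to confirm a blocked port is a plain TCP connect test against your endpoint, run once on the school network and once at home. A minimal sketch using only Python's standard library (substitute your own endpoint):

import socket

ENDPOINT = "XXXXX-ats.iot.ap-southeast-2.amazonaws.com"  # your IoT endpoint

try:
    # If this times out on one network but succeeds on another,
    # the first network is likely blocking outbound port 8883.
    with socket.create_connection((ENDPOINT, 8883), timeout=5):
        print("TCP connection to port 8883 succeeded")
except OSError as e:
    print("Could not reach port 8883:", e)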
I am developing an ASP.NET Core Web API project. In my project, I am using Hangfire to run background tasks. Therefore, I am configuring Hangfire to use the database like this.
public void ConfigureServices(IServiceCollection services)
{
    services.AddHangfire(configuration =>
    {
        configuration.UseSqlServerStorage("Server=(LocalDB)\\MSSQLLocalDB;Integrated Security=true;");
    });
    //
}
In the above code, I am using LocalDB. Now I am trying to use an AWS RDS database, since I am deploying my application on AWS Elastic Beanstalk. I created a function for getting the connection string:
public static string GetRDSConnectionString()
{
    string dbname = "ebdb";
    if (string.IsNullOrEmpty(dbname)) return null;
    string username = "admin";
    string password = "password";
    string hostname = "cxcxcxcx.xcxcxcxc.eu-west-2.rds.amazonaws.com:1234";
    string port = "1234";
    return "Data Source=" + hostname + ";Initial Catalog=" + dbname + ";User ID=" + username + ";Password=" + password + ";";
}
I got the above code from the official AWS documentation. What I am not clear about in the above code is the database name: is the database name always "ebdb"? I tried to find out the database name but could not. The tutorial says to use ebdb, so I used it.
Then in configuration, I changed to this.
configuration.UseSqlServerStorage(AppConfig.GetRDSConnectionString());
When I run the code, it is giving me this error.
Win32Exception: The parameter is incorrect
Unknown location
SqlException: A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: SQL Network Interfaces, error: 25 - Connection string is not valid)
System.Data.SqlClient.SqlInternalConnectionTds..ctor(DbConnectionPoolIdentity identity, SqlConnectionString connectionOptions, object providerInfo, bool redirectedUserInstance, SqlConnectionString userConnectionOptions, SessionData reconnectSessionData, bool applyTransientFaultHandling)
Win32Exception: The parameter is incorrect
Basically, it cannot connect to the database when I run my application, even though I have set the correct credentials. The only thing I doubt is the database name (ebdb). What is wrong with my configuration, and how can I fix it?
Calling a few things out here just in case...
You have your port specified both in your host variable and as a separate port variable, but you never use port (see the sketch after these points).
Can you confirm that you are able to access your SQL Server via other means, such as from SQL Server Management Studio?
RDS uses SSL by default now for connections; my .NET is rusty, but would you need to tell the connection string to use a secure protocol?
And finally, regarding the AWS security group on your RDS instance: have you opened up the correct port to your machine/network/IP?
This is the screenshot of the RDS db instance security group section in the console.
I am trying to connect to an AWS Redshift database from a Lambda function using C#, .NET Core 2.0, and Npgsql. I am having difficulty with SSL.
I have created two non-publicly-accessible Redshift databases in a dedicated VPC. The lambda executes in the same VPC. The two databases are identical in every way except that one has the "force SSL" parameter set to true.
Using the following code snippet, I can access the non-SSL database just fine:
using (var conn = new NpgsqlConnection("Host=x; Port=5439; Username=x; Password=x; Database=xxx"))
{
    Console.WriteLine("Redshift pre-Open!");
    conn.Open();
    Console.WriteLine("Redshift: post-Open!");
    ...
}
When I access the SSL database, I get the "missing hba.conf" error message - seems standard, I've seen it before ...
When I append "SSL Mode=Require;Server Compatibility Mode=Redshift;Trust Server Certificate=true" to the connection string, the conn.Open() statement hangs, and the second write statement never shows up in CloudWatch.
And yet this connection string works when accessing the same database through a REST API and a C#/.NET Core 2 Web API (same runtime environment), with an EC2 instance and load balancer.
A Python Lambda connecting to the SSL database in the same environment - subnets, security groups, lambda triggers, lambda parameters, ... - is working just fine.
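For reference, that Python Lambda does essentially the following (a trimmed sketch with placeholder host and credentials, using psycopg2):

import psycopg2  # bundled with the Lambda deployment package

# Placeholder connection details.
conn = psycopg2.connect(
    host="my-cluster.xxxx.eu-west-1.redshift.amazonaws.com",
    port=5439,
    user="x",
    password="x",
    dbname="xxx",
    sslmode="require",  # TLS on, matching the "force SSL" parameter group
    connect_timeout=10,
)
print("Redshift: post-Open!")
conn.close()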
The csproj references Amazon.Lambda.Core 1.0.0, Amazon.Lambda.Serialization.Json 1.1.0, and
Npgsql.EntityFrameworkCore.PostgreSQL 2.0.1.
I'd try Wireshark, maybe, in another environment - but running as a Lambda, I'm not sure how best to debug. I've tried many permutations and combinations, and I wouldn't put it past myself to be missing something blindingly obvious, but I absolutely do not see why it hangs. Thank you.
I'm currently in need of connecting to a fake_sqs server for dev purposes, but I can't find an easy way to specify the endpoint for the boto.sqs connection. In Java and Node.js there are ways to specify the queue endpoint, and by passing something like 'localhost:someport' I can connect to my own SQS-like instance. I've tried the following with boto:
fake_region = regioninfo.SQSRegionInfo(name=name, endpoint=endpoint)
conn = fake_region.connect(aws_access_key_id="TEST", aws_secret_access_key="TEST", port=9324, is_secure=False)
and then:
queue = conn.get_queue('some_queue')
but it fails to retrieve the queue object; it returns None. Has anyone managed to connect to their own SQS instance?
Here's how to create an SQS connection that connects to fake_sqs:
import boto.sqs.connection
import boto.sqs.regioninfo

region = boto.sqs.regioninfo.SQSRegionInfo(
    connection=None,
    name='fake_sqs',
    endpoint='localhost',  # or wherever fake_sqs is running
    connection_cls=boto.sqs.connection.SQSConnection,
)
conn = boto.sqs.connection.SQSConnection(
    aws_access_key_id='fake_key',
    aws_secret_access_key='fake_secret',
    is_secure=False,
    port=4568,  # or wherever fake_sqs is running
    region=region,
)
region.connection = conn
# you can now work with conn
# conn.create_queue('test_queue')
Be aware that, at the time of this writing, the fake_sqs library does not respond correctly to GET requests, which is how boto makes many of its requests. You can install a fork that has patched this functionality here: https://github.com/adammck/fake_sqs
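Once the connection is wired up, a quick round trip confirms the fake endpoint is actually being used (a small sketch; the queue name and message body are arbitrary):

from boto.sqs.message import Message

q = conn.create_queue('test_queue')
m = Message()
m.set_body('hello from fake_sqs')
q.write(m)

# Read the message back through the same connection.
msgs = q.get_messages()
print(msgs[0].get_body() if msgs else "no message received")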