AWS DocumentDB: connecting with Python and TLS PEM bundle questions

We use AWS Fargate with a Python project. The AWS default setup uses a PEM file when connecting. I know I can turn off TLS.
My coworker says he doesn't want to store credentials in the same repo as code. What is the recommended storage location of that file?
Why do I need it when the servers are inside a VPC?
Do I need a different PEM file if I create a cluster in AWS GovCloud, or does the bundle include all I need?
Do I need it if I'm using an Amazon Linux 2 instance?

Here are the answers:
You can store the PEM file anywhere (the same repository, or any other location the code can pull from), but it must be accessible to the code when it makes an encrypted connection and performs server validation. The bundle contains only public CA certificates, not credentials.
The communication between servers in a VPC is private, but using a server certificate provides an extra layer of security by validating that the connection is being made to an Amazon DocumentDB cluster.
An Amazon DocumentDB cluster in a GovCloud region has a similar but separate CA bundle for TLS connections.
Even if you are using an Amazon Linux 2 instance, the PEM certificate file needs to be stored on the instance so the code can refer to it and validate the server when opening a connection.
From a security point of view, it is always best practice to use TLS and authenticate the server with the certificate.
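For reference, a minimal sketch of an encrypted connection with pymongo (assuming pymongo 3.9 or newer); the cluster endpoint, username, password, and the /app/global-bundle.pem path below are placeholders, not values from the original question:
import pymongo

# Placeholders: substitute your own cluster endpoint, credentials, and the
# local path where the Amazon-provided CA bundle was downloaded.
client = pymongo.MongoClient(
    "docdb-cluster.cluster-xxxxxxxxxxxx.us-east-1.docdb.amazonaws.com",
    port=27017,
    username="myuser",
    password="mypassword",
    tls=True,                             # encrypt the connection
    tlsCAFile="/app/global-bundle.pem",   # validate the server against the CA bundle
    retryWrites=False,                    # DocumentDB does not support retryable writes
)
print(client.admin.command("ping"))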

Related

How can I get my rds-ca-2019 PEM file from Amazon AWS RDS

I have an RDS instance of type PostgreSQL that I created with AWS.
Under Connectivity, I see it has rds-ca-2019 defined.
I need that certificate to connect from a Java client application.
I tried using the global PEM, but it does not seem to match and the SSL connection fails.
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.SSL.html
Where can I get this rds-ca-2019 certificate?
The download resource I was looking for can be found at the attached link below:
https://lightsail.aws.amazon.com/ls/docs/en_us/articles/amazon-lightsail-download-ssl-certificate-for-managed-database
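The question is about a Java client, but purely as an illustration of how the downloaded CA file is used, here is a hedged Python sketch with psycopg2; the endpoint, credentials, and certificate path are placeholders:
import psycopg2

# Placeholders: substitute your RDS endpoint, credentials, and the path to the
# downloaded CA bundle (for example the rds-ca-2019 root certificate).
conn = psycopg2.connect(
    host="mydb.xxxxxxxxxxxx.us-east-1.rds.amazonaws.com",
    dbname="postgres",
    user="myuser",
    password="mypassword",
    sslmode="verify-full",                        # verify the certificate and hostname
    sslrootcert="/path/to/rds-ca-2019-root.pem",
)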

How to connect to Google Cloud SQL with enforced SSL from within App Engine app

I would like to have SSL enforced when connecting to my Google Cloud SQL (Postgres) instance, but I can't find a way to provide a certificate to my Python app deployed to App Engine. psycopg2 requires the certificate to have the proper permissions: <cert> has group or world access; permissions should be u=rw (0600) or less, but since there's no access to chmod on App Engine, I don't know how to change it.
Any ideas?
To provide context, the goal is primarily to enforce secure access to the DB from outside of GCP (from my workstation). The problem with App Engine apps is a side effect of this change. But maybe there's a different and better way to achieve the same?
Thanks.
Google Cloud App Engine uses the Cloud SQL proxy for any connections made through the built-in /cloudsql Unix socket directory. These connections are automatically wrapped in an SSL layer until they reach your Cloud SQL instance.
You don't need to worry about using certificates, as the proxy automates this process for you.
This is a bit tricky because there is a chicken-and-egg problem here:
The Postgres driver wants a file (because it's a wrapper around a C library that reads the file)
And you cannot set the correct file permissions inside App Engine on boot.
There are several cases (and solutions) for this problem:
AppEngine Classic environment
Workaround - write the cert file yourself in the /tmp ramdisk
Use a UNIX socket to connect to a Cloud SQL instance with public IP address (no SSL).
Or use a VPC and private VPC access for a Cloud SQL instance with a private IP address, and use the workaround.
AppEngine Flex Environment
Include the cert file in your docker image
Workaround: Write the cert file yourself in /tmp
The easiest way to work around this problem is to write the cert file yourself into /tmp - this is a ramdisk in the app engine environment.
Something like this would do the trick:
import os
import psycopg2

certfile_path = '/tmp/certfile'  # /tmp/ is a ramdisk
certfile_content = os.environ.get('CERTFILE')

if certfile_content is not None:
    # Write the certificate from the environment variable to the ramdisk and
    # give it the 0600 permissions that the driver expects.
    with open(certfile_path, 'w+') as fp:
        fp.write(certfile_content)
    os.chmod(certfile_path, 0o600)
    # Now the certfile is "safe" to use; we can pass it in our connection string:
    connection_string = f"postgresql+psycopg2://host/database?sslmode=require&sslcert={certfile_path}"
else:
    connection_string = 'postgresql+psycopg2://host/database'
    # or just throw an exception, log the problem, and exit.
(See Postgres connection string reference for SSL options)
Long term solution
The point of using SSL is to have both encrypted traffic between the database and you, plus avoiding man-in-the-middle attacks.
We have some options here:
Option 1: When you're inside App Engine AND your Cloud SQL instance has a public IP address.
In this case, you can connect to the Postgres instance over a Unix domain socket. Google runs a proxy listening on that socket and takes care of encrypting and securing the endpoints (SSL on the database connection itself is disabled).
In this case, you'll need to add Cloud SQL Client permissions to the service account you use AND add the Unix socket directory in your YAML file (see the Google documentation).
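A minimal sketch of this option, assuming psycopg2 and that the connection name and credentials are supplied through environment variables (all names here are placeholders):
import os
import psycopg2

# Placeholder: "project:region:instance" for your Cloud SQL instance.
connection_name = os.environ["CLOUD_SQL_CONNECTION_NAME"]

conn = psycopg2.connect(
    host=f"/cloudsql/{connection_name}",  # Unix socket directory served by the proxy
    dbname="mydb",
    user=os.environ["DB_USER"],
    password=os.environ["DB_PASS"],
)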
Option 2: When you're inside App Engine Classic AND your cloud SQL instance has a PRIVATE IP address.
If your cloud SQL instance has a private IP, this can be a bit "simpler".
We need to enable Serverless VPC Access - this connects App Engine to the VPC where we have resources on private IP addresses (for public IPs it is not needed because public IP addresses don't overlap) AND the Cloud SQL server must be connected to the VPC.
Now you can connect to Postgres as usual. The traffic inside the VPC network is encrypted. If you have just this Cloud SQL instance in the VPC, there is no need to use SSL (nobody can put a sniffer on the network or do a man-in-the-middle attack).
But if you still want/need to use SSL, you have to use the workaround described before.
Option 3: Use AppEngine FLEX environment.
The Flex environment is 'just' a Docker image.
You'll need to build a custom Docker image for the Python runtime that includes the certfile.
Remember the Flex environment is not included in the free tier.

RDS SQL Server TLS/SSL encryption from application servers

I need to encrypt data in transit from application servers to RDS SQL Server with SSL/TLS.
I see AWS gives the option to set force encryption = true in the parameter group, with self-signed certs.
Is there a way to import customer certs into RDS?
Are there any configuration steps to do this at the application server and on RDS?
I'd appreciate any info on this. I didn't find anything in the AWS knowledge base.
Note: Application servers sit behind load balancer.
For RDS SQL Server you will need to use the PEM that AWS provides for TLS.
You have a choice of either:
Root certificate
Intermediary and root certificate
The application server will need to have access to this certificate before it can connect to the RDS instance.
Unfortunately, at this time only Aurora supports uploading your own certificates (and then accessing them via ACM); for RDS SQL Server you will need to use the provided one.
For connecting and configuring the RDS there is a specific Using SSL with a Microsoft SQL Server DB Instance page.
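The question doesn't name a client library, so purely as an illustration, here is a hedged Python sketch with pyodbc, assuming ODBC Driver 17 for SQL Server is installed and the RDS CA bundle has been added to the operating system's trust store; the endpoint, database, and credentials are placeholders:
import pyodbc

# Encrypt=yes forces TLS in transit; TrustServerCertificate=no makes the driver
# validate the server certificate against the trusted RDS CA bundle.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=mydb.xxxxxxxxxxxx.us-east-1.rds.amazonaws.com,1433;"
    "DATABASE=master;"
    "UID=admin;PWD=mypassword;"
    "Encrypt=yes;TrustServerCertificate=no;"
)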

Trying to find my SSL certificate I created on AWS Certificates

I set up my EC2 instance and got HTTPS working for a bit, only to realize I need TLS 1.2 by default, and in order to do that I had to configure my code to instruct it to read my cert file. The problem is I don't know which one it is, as there are 269 files in the directory /etc/ssl/certs. I have googled for a couple of hours hoping something would tell me where to look to check what file Amazon generated for me that it specifically wants. Otherwise I'm shooting in the dark trying PEMs one at a time.
const https = require('https');
const fs = require('fs');
https.createServer({
  secureProtocol: 'TLSv1_2_server_method',
  pfx: fs.readFileSync("/etc/ssl/certs/FILENAME.PEM")
}, app).listen(443);
Help is greatly appreciated.
Please refer to the EC2 instance details in the AWS Management Console.
Steps:
Log in to the AWS Management Console and go to EC2 -> Instances.
Select the instance to which we need to connect and scroll through the description in the bottom window, which has the EC2 instance details.
Check for "Key pair name"; this is the key pair which needs to be used to securely connect to the respective EC2 instance.
I assume that you got a certificate from Amazon ACM.
ACM certificates can be used with:
Elastic Load Balancing
Amazon CloudFront
Amazon API Gateway
AWS Elastic Beanstalk
AWS CloudFormation (for email validation only)
The certificate issued by ACM cannot be installed directly on an EC2 instance.
If you want to install an SSL certificate directly on your EC2 instance, you will need to obtain an SSL certificate through a third party.
Therefore, you cannot find any files related to the certificate issued by ACM inside your EC2 instance.
Hope this helps.

How to read write from Encrypted Amazon ElastiCache Redis Server without using stunnel?

I have an Amazon ElastiCache Redis server with encryption in-transit and encryption at-rest enabled. From what I have read in the documentation:
https://aws.amazon.com/premiumsupport/knowledge-center/elasticache-connect-redis-node/
we need to install stunnel and go through localhost to access the server from our local environment or an EC2 instance. Is there any way to avoid this? I am using Redisson as the Java API.
I finally found a way to interact with an encrypted AWS Redis cluster without using stunnel: we can use the prefix "rediss://" instead of "redis://" (the extra "s" denotes an SSL client) when setting the address through the API.
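The original question uses Redisson, but the same scheme works with any TLS-capable client; here is a minimal Python sketch with redis-py, where the endpoint is a placeholder:
import redis

# "rediss://" (note the extra "s") tells the client to wrap the connection in TLS,
# which is what an in-transit-encrypted ElastiCache endpoint requires.
r = redis.from_url("rediss://my-cluster.xxxxxx.use1.cache.amazonaws.com:6379/0")
r.set("hello", "world")
print(r.get("hello"))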