How to read/write from an encrypted Amazon ElastiCache Redis server without using stunnel? - amazon-web-services

I have an Amazon ElastiCache Redis server with encryption in-transit and encryption at-rest enabled. From what I have read in the documentation:
https://aws.amazon.com/premiumsupport/knowledge-center/elasticache-connect-redis-node/
We need to install stunnel and connect through localhost to access the server from our local environment or an EC2 instance. Is there any way to avoid this? I am using Redisson as the Java API.

Finally found a way to interact with an AWS encrypted Redis cluster without using stunnel. Found that we can do it using the prefix "rediss://" instead of "redis://" (the extra "s" denotes an SSL client) while setting the address through the API.
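For illustration, here is a minimal sketch of the same idea using the Python redis-py client instead of Redisson (in Redisson the same rediss:// address is set on its config, e.g. via useSingleServer().setAddress(...)); the endpoint below is a placeholder:

import redis

# Hypothetical in-transit-encrypted ElastiCache endpoint
client = redis.from_url("rediss://my-cluster.example.cache.amazonaws.com:6379")
client.ping()  # TLS is negotiated because of the rediss:// scheme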

Related

How to connect to Google Cloud SQL with enforced SSL from within App Engine app

I would like to have SSL enforced when connecting to my Google Cloud SQL (Postgres) instance, but I can't find a way to provide a certificate to my Python app deployed to App Engine - psycopg2 requires the certificate to have proper permissions: "<cert> has group or world access; permissions should be u=rw (0600) or less", but since there's no access to chmod on App Engine, I don't know how to change it.
Any ideas?
To provide a context, the goal is primarily to enforce secure access to the DB outside of GCP (from my workstation). The problem with App Engine apps is a side-effect of this change. But maybe there's a different and better way to achieve the same?
Thanks.
Google Cloud App Engine uses the Cloud SQL proxy for any connections using the built-in /cloudsql unix socket directory. These connections are automatically wrapped in an SSL layer until they reach your Cloud SQL instance.
You don't need to worry about using certificates, as the proxy automates this process for you.
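A minimal sketch of such a connection from App Engine, assuming a hypothetical instance connection name, database and credentials (no certificate file is involved):

import psycopg2

# Hypothetical instance connection name and credentials
conn = psycopg2.connect(
    host="/cloudsql/my-project:us-central1:my-instance",  # built-in unix socket directory
    dbname="mydb",
    user="myuser",
    password="mypassword",
)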
This is a bit tricky because there is a chicken-and-egg problem here:
The Postgres driver wants a file (because it's a wrapper around a C library that reads the file)
And you cannot set the correct file permissions inside App Engine on boot.
There are several cases (and solutions) for this problem:
AppEngine Classic environment
Workaround - write the cert file yourself in the /tmp ramdisk
Use a UNIX socket to connect to a Cloud SQL instance with a public IP address (no SSL).
Or use a VPC and Serverless VPC Access for a Cloud SQL instance with a private IP address, and use the workaround.
AppEngine Flex Environment
Include the cert file in your docker image
Workaround: Write the cert file yourself in /tmp
The easiest way to work around this problem is to write the cert file yourself into /tmp - this is a ramdisk in the app engine environment.
Something like this would do the trick:
import os
import psycopg2

certfile_path = '/tmp/certfile'  # /tmp/ is a ramdisk
certfile_content = os.environ.get('CERTFILE')  # cert material passed in via an environment variable

if certfile_content is not None:
    # Write the certificate into the ramdisk and give it the 0600 permissions the driver insists on
    with open(certfile_path, 'w+') as fp:
        fp.write(certfile_content)
    os.chmod(certfile_path, 0o600)
    # Now the certfile is "safe" to use; we can pass it in our connection string:
    connection_string = f"postgresql+psycopg2://host/database?sslmode=require&sslcert={certfile_path}"
else:
    connection_string = 'postgresql+psycopg2://host/database'
    # or just throw an exception, log the problem, and exit.
(See Postgres connection string reference for SSL options)
Long term solution
The point of using SSL is to have encrypted traffic between you and the database, and to avoid man-in-the-middle attacks.
We have some options here:
Option 1: When you're inside App Engine AND your Cloud SQL instance has a public IP address.
In this case, you can connect to the Postgres instance with a Unix domain socket. Google runs a proxy listening on that socket and takes care of encryption and securing the endpoints (note: SSL itself is disabled on that connection).
You'll also need to add Cloud SQL Client permissions to the service account you use AND add the UNIX socket directory in your YAML file (see the Google documentation).
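Since the snippets above use SQLAlchemy-style URLs, the socket-based equivalent would look roughly like this (project, region, instance and credentials are placeholders):

# Hypothetical instance connection name: my-project:us-central1:my-instance
connection_string = (
    "postgresql+psycopg2://myuser:mypassword@/mydb"
    "?host=/cloudsql/my-project:us-central1:my-instance"
)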
Option 2: When you're inside App Engine Classic AND your cloud SQL instance has a PRIVATE IP address.
If your cloud SQL instance has a private IP, this can be a bit "simpler".
We need to enable Serverless VPC Access - this connects App Engine to the VPC where we have resources on private IP addresses (it is not needed for public IPs because public IP addresses don't overlap) AND the Cloud SQL server must be connected to the VPC.
Now you can connect to Postgres as usual. The traffic inside the VPC network is encrypted. If you have just this Cloud SQL instance in the VPC, there is no need to use SSL (nobody can put a sniffer on the network or perform a MITM attack).
But if you still want/need to use SSL, you have to use the workaround described before.
Option 3: Use AppEngine FLEX environment.
The flex environment is 'just' a docker image.
You'll need to build a custom docker image for the Python runtime that includes the certfile.
Remember the Flex environment is not included in the free tier.

AWS DocumentDB connecting with python and TLS pem bundle questions

We use AWS Fargate with a Python project. AWS's default setup uses a PEM file when connecting. I know I can turn off TLS.
My coworker says he doesn't want to store credentials in the same repo as code. What is the recommended storage location of that file?
Why do I need it when the servers are inside a VPC?
Do I need a different PEM file if I create a cluster on AWS govcloud or does the bundle include all I need?
Do I need it if I'm using an AWS linux 2 instance?
Please find the answers:
You can store the PEM file anywhere (the same repository, or any other repository the code can pull it from), but it should be accessible to the code when making an encrypted connection and performing server validation.
The communication between the servers in VPC is private, but using a server certificate provides an extra layer of security by validating that the connection is being made to an Amazon DocumentDB cluster.
An Amazon DocumentDB cluster in GovCloud region should have a similar separate bundle for TLS connection.
Even if you are using an Amazon Linux 2 instance, the PEM certificate file needs to be stored on the instance so the code can refer to it and validate the server when opening a connection.
It is always a best practice from a security point of view to use TLS and authenticate the server with the certificate.
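As a rough sketch with pymongo (the endpoint, credentials and bundle path below are placeholders; the bundle is the CA file downloaded from AWS for your partition):

from pymongo import MongoClient

# Hypothetical endpoint and credentials; global-bundle.pem is the CA bundle downloaded from AWS
client = MongoClient(
    "mongodb://myuser:mypassword@my-cluster.cluster-id.us-east-1.docdb.amazonaws.com:27017",
    tls=True,
    tlsCAFile="/path/to/global-bundle.pem",  # used to validate the DocumentDB server certificate
    retryWrites=False,  # DocumentDB does not support retryable writes
)
client.admin.command("ping")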

RDS SQL server TLS/SSL encrytion from application servers

I need to encrypt data in transit from application servers to RDS SQL Server with SSL/TLS.
I see AWS gives the option to set force encryption = true in the parameter group with self-signed certs.
Is there a way to use customer certs to import into RDS?
Any configuration steps to do this at application server and on RDS?
Appreciate any info on this. I didn't find anything in the AWS knowledge base.
Note: Application servers sit behind load balancer.
For RDS SQL Server you will need to use the PEM that AWS provides for TLS.
You have a choice of either:
Root certificate
Intermediary and root certificate
The application server will need to have access to this certificate before it can connect to the RDS instance.
Unfortunately, at this time only Aurora supports uploading your own certificates (and then accessing them via ACM); for RDS SQL Server you will need to use the provided one.
For connecting and configuring the RDS there is a specific Using SSL with a Microsoft SQL Server DB Instance page.
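A minimal sketch with pyodbc, assuming the RDS root/intermediate certificate has already been imported into the operating system's trust store on the application server (server name and credentials are placeholders):

import pyodbc

# Hypothetical endpoint and credentials; Encrypt=yes forces TLS for the session,
# and TrustServerCertificate=no makes the driver validate the RDS certificate
# against the CA you imported.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=mydb.example.us-east-1.rds.amazonaws.com,1433;"
    "DATABASE=mydb;UID=myuser;PWD=mypassword;"
    "Encrypt=yes;TrustServerCertificate=no;"
)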

aws elasticache redis set and get

I am new to the AWS SDK and I am running a Node.js application on an EC2 instance.
I am trying to use ElastiCache Redis in the Node.js application. However, I cannot find an ElastiCache API for making basic Redis calls. The URL below does not cover any Redis commands.
http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/ElastiCache.html#addTagsToResource-property
What should I do to issue Redis commands to ElastiCache (Redis) with the aws-sdk?
The ElastiCache API is used to start, stop & configure Redis and Memcached instances. It is not used for communicating with those instances.
In order to send commands to Redis or Memcached you need to use normal clients, here is the list of Node.js clients for Redis: http://redis.io/clients#nodejs
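For example (this page's other snippets are Python, so here is the same idea with redis-py rather than a Node.js client; the endpoint is a placeholder):

import redis

# Hypothetical ElastiCache endpoint taken from the cluster's configuration page
client = redis.Redis(host="my-cluster.example.cache.amazonaws.com", port=6379)
client.set("counter", 1)
print(client.get("counter"))  # b'1'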

ElasticSearch install on AWS - unable to connect on public ip/host

I have a single EC2 CentOS instance with Elasticsearch installed.
I am unable to connect externally using the public ip or hostname.
Elasticsearch starts correctly and I can access it locally on the machine using:
curl <my_internal_ip>:9200
However running the same remotely using the public ip fails.
I have the cloud-aws plugin installed
I have set up an AWS security group with all TCP ports open for testing
I am guessing I need to bind the address within the elasticsearch.yml file, however do not understand which setting to use, and with what address. Setting the network.host to an external address stops ES from starting - unable to bind.
Appreciate any comments.
The EC2 instance can be accessed remotely once Elasticsearch is installed with the AWS cloud plugin it provides.
After many hours...
iptables was enabled on the OS by default, blocking my Elasticsearch ports.
I had the same issue and resolved it with the steps below. For the "Elasticsearch process memory locking failed" error, set:
MAX_LOCKED_MEMORY=unlimited
LimitMEMLOCK=infinity
Do these two steps as well.