AWS Nitro Enclave Socket Connection to Database - amazon-web-services

I'd like to host an app that uses a database connection in an AWS Nitro enclave.
I understand that the Nitro enclave doesn't have access to a network or persistent storage, and the only way that it can communicate with its parent instance is through the vsock.
There are some examples showing how to configure a connection from the enclave to an external url through a secure channel using the vsock and vsock proxy, but the examples focus on AWS KMS operations.
I'd like to know if it's possible to configure the secure channel through the vsock and vsock proxy to connect to a database like postgres/mysql etc...
If this is indeed possible, are there perhaps some example configurations somewhere?

Nitrogen is an easy solution for this, and it's completely open source (disclosure I'm one of the contributors to Nitrogen).
You can see an example configuration for deploying Redis to a Nitro Enclave here.
And a more detailed blog post walkthrough of deploying any Docker container to a Nitro Enclave here.
Nitrogen is a command line tool with three main commands:
Setup - Spawn an EC2 instance, configure SSH, and establish a VSOCK proxy for interacting with the Nitro Enclave.
Build - Create a Docker image from an arbitrary Dockerfile, and convert it to the Enclave Image File (EIF) format expected by Nitro.
Deploy - Upload your EIF and launch it as a Nitro Enclave. You receive a hostname and port that are ready to proxy enclave requests to your service.
You can set up, build, and deploy any Dockerfile in a few minutes to your own AWS account.

I would recommend looking into Anjuna Security's offering: https://www.anjuna.io/amazon-nitro-enclaves
Outside of using Anjuna, you could look into the AWS Nitro SDK and use that to build a networking stack to utilize the vsock or modify an existing sample.
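If you want to roll your own instead, the pattern is the same one the KMS examples use: a small forwarder on the parent instance bridges a vsock port to the real TCP endpoint, and the code inside the enclave speaks vsock instead of TCP. Below is a minimal, untested sketch of the parent-instance side in Python; the database hostname, port, and vsock port number are placeholder values, not anything prescribed by AWS.

# Minimal sketch (untested) of a vsock-to-TCP forwarder on the parent instance.
# DB_HOST/DB_PORT are placeholders for your own Postgres/MySQL endpoint, and
# VSOCK_PORT is an arbitrary example value that the enclave must also use.
import socket
import threading

DB_HOST = "mydb.example.us-east-1.rds.amazonaws.com"  # assumption: your DB endpoint
DB_PORT = 5432                                        # assumption: Postgres port
VSOCK_PORT = 8001                                     # port the enclave dials over vsock

def pipe(src, dst):
    # Copy bytes from src to dst until EOF, then signal the other side.
    try:
        while True:
            data = src.recv(4096)
            if not data:
                break
            dst.sendall(data)
    finally:
        try:
            dst.shutdown(socket.SHUT_WR)
        except OSError:
            pass

def handle(conn):
    # For each enclave connection, open a TCP connection to the database
    # and shuttle bytes in both directions.
    db = socket.create_connection((DB_HOST, DB_PORT))
    threading.Thread(target=pipe, args=(conn, db), daemon=True).start()
    threading.Thread(target=pipe, args=(db, conn), daemon=True).start()

listener = socket.socket(socket.AF_VSOCK, socket.SOCK_STREAM)
listener.bind((socket.VMADDR_CID_ANY, VSOCK_PORT))
listener.listen()
while True:
    conn, _ = listener.accept()
    handle(conn)

Inside the enclave you would run the mirror image of this: listen on 127.0.0.1:5432 and forward each accepted connection to (CID 3, port 8001) over AF_VSOCK (CID 3 is the parent instance as seen from the enclave), so your application can keep using an ordinary 127.0.0.1 connection string for Postgres or MySQL. If you need the channel encrypted end to end, have the client inside the enclave negotiate TLS with the database (e.g. sslmode=require) so the parent instance only ever forwards ciphertext.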

Related

Is it possible to set up auto-scaling so that it always duplicates the most recent version of your main server?

I know that you can create an image of your server as-is and set up auto-scaling on that, but what if I then make changes to my original server? Do I have to then make another snapshot of that and set up auto-scaling again?
There are two approaches for configuring a server:
Creating an Amazon Machine Image (AMI) with all software fully configured, or
Having the instance configure itself via a startup script triggered via User Data
A fully-configured AMI is fast to startup, whereas a configuration script can take several minutes to configure the instance before the instance is ready to accept traffic.
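As a sketch of the second approach (the AMI ID, S3 bucket, and packages below are hypothetical, not a drop-in script), you can put the configuration steps in the User Data of a launch template that your Auto Scaling group uses, so every new instance configures itself with the latest build at boot:

import base64
import boto3

# User Data runs once on first boot; here it installs Apache and pulls the
# latest application build from S3 (bucket and paths are assumptions).
user_data = """#!/bin/bash
yum install -y httpd
aws s3 cp s3://my-deploy-bucket/latest.tar.gz /tmp/app.tar.gz
tar -xzf /tmp/app.tar.gz -C /var/www/html
systemctl enable --now httpd
"""

ec2 = boto3.client("ec2", region_name="us-east-1")
ec2.create_launch_template(
    LaunchTemplateName="web-latest",
    LaunchTemplateData={
        "ImageId": "ami-0123456789abcdef0",   # hypothetical base AMI
        "InstanceType": "t3.micro",
        # The EC2 API expects launch template User Data to be base64-encoded.
        "UserData": base64.b64encode(user_data.encode()).decode(),
    },
)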
In general, it is not considered good practice to "make changes to my original server" because there is no concept of an "original server"; all instances are considered equal. Instead, the configuration should be created and tested on development servers separate from any production servers, and then released either by deploying new servers or by having an 'update' script intelligently run on existing servers. Some of these capabilities are provided by AWS CodeDeploy.

AWS Datapipeline RDS to S3 Activity Error: Unable to establish connection to jdbc://mysql:

I am currently setting up an AWS Data Pipeline using the RDStoRedshift template. During the first RDStoS3Copy activity I am receiving the following error:
"[ERROR] (TaskRunnerService-resource:df-04186821HX5MK8S5WVBU_#Ec2Instance_2021-02-09T18:09:17-0) df-04186821HX5MK8S5WVBU amazonaws.datapipeline.database.ConnectionFactory: Unable to establish connection to jdbc://mysql:/myhostname:3306/mydb No suitable driver found for jdbc://mysql:/myhostname:3306/mydb"
I'm relatively new to AWS services, but it seems that the copy activity spins up an EC2 instance to do the copy. The error clearly states there isn't a driver available. Do I need to stand up an EC2 instance for AWS Data Pipeline to use and install the driver there?
Typically when you are coding a solution that interacts with a MySQL RDS instance, especially a Java solution such as a Lambda function written with the Java runtime API or a cloud-based web app (e.g. a Spring Boot web app), you specify the driver file using a POM/Gradle dependency.
For this use case, there seems to be information here about a Driver file: https://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-object-jdbcdatabase.html
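For example (a hedged sketch, not a verified pipeline definition), the linked JdbcDatabase object lets you point the pipeline at a driver JAR you have uploaded to S3; the host, credentials, and S3 path below are placeholders. Note also that the usual connection string format is jdbc:mysql://host:port/db, rather than jdbc://mysql:/host:port/db as in the error above.

import boto3

# Sketch of a JdbcDatabase pipeline object, using the field names from the
# linked documentation; values are placeholders.
database_object = {
    "id": "rds_mysql",
    "name": "rds_mysql",
    "fields": [
        {"key": "type", "stringValue": "JdbcDatabase"},
        {"key": "connectionString", "stringValue": "jdbc:mysql://myhostname:3306/mydb"},
        {"key": "jdbcDriverClass", "stringValue": "com.mysql.jdbc.Driver"},
        # A MySQL Connector/J JAR you have uploaded to S3 yourself:
        {"key": "jdbcDriverJarUri", "stringValue": "s3://my-bucket/mysql-connector-java.jar"},
        {"key": "username", "stringValue": "myuser"},
        {"key": "*password", "stringValue": "mypassword"},
    ],
}

client = boto3.client("datapipeline", region_name="us-east-1")
client.put_pipeline_definition(
    pipelineId="df-04186821HX5MK8S5WVBU",  # the pipeline ID from the error message
    pipelineObjects=[database_object],     # in practice, the rest of the template goes here too
)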

AWS: need guidance to deploy my Django Project

I have a Django web app. I am planning to deploy it on AWS.
I am using Celery and the RabbitMQ queue manager for my application.
I have read about the AWS services.
I have two options:
1) AWS Elastic Beanstalk, or
2) Create a Linux EC2 instance and install PostgreSQL, Celery, RabbitMQ, etc.
So which is better to use?
AWS EC2 is always a better option, as it gives you complete access to the OS and direct access to the data storage. This will help you manage your application in a much more efficient way. Also, an EC2 instance can host not just a single application but as many applications as you require (depending on the capacity/instance type of the server). This also lets you tweak the web server proxy.
In the case of Beanstalk you do not get similar options; you have to manage the application with the options that are available to you.
To summarise:
If you want complete control of your application - use EC2.
If you are looking for a managed service where not much control is required, you can opt for Beanstalk. Personally, I would like to have complete control over my application ;)

How to log from multiple ec2 instances(load balanced) to a common server using AWS

I have multiple EC2 instances (load balanced, launched from the same image) running Apache servers, and I want to send all the log data to a common server.
Does AWS provide any tools for doing this?
AWS CloudWatch has this feature: you can ship logs from multiple servers and monitor them through the CloudWatch console. See the steps below:
http://cloudacademy.com/blog/centralized-log-management-with-aws-cloudwatch-part-1-of-3/
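Once the CloudWatch Logs agent on each instance ships /var/log/httpd/access_log into a shared log group, you can read every instance's entries from one place. A small sketch (the log group name and filter below are assumptions, not defaults):

import boto3

logs = boto3.client("logs", region_name="us-east-1")
resp = logs.filter_log_events(
    logGroupName="apache/access",   # assumption: the group your agents write to
    filterPattern='"GET"',          # optional CloudWatch Logs filter pattern
    limit=50,
)
for event in resp["events"]:
    # The log stream name typically identifies the instance that produced the line.
    print(event["logStreamName"], event["message"])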

Access one environment from another in Engine Yard

We have a couple of environments in Engine Yard. Each of them runs the same application, but on different stages: production, staging, etc. In total about 10 environments. Now, we want to dump the production database every night, and restore it on the rest of environments to have the latest data.
The problem is, an instance from one environment can't access instances in other environments. There are two ways to connect that are suitable for us:
SSH.
Specify the RDS host as the --host parameter to mysqldump. The RDS host is of the form environment.random_string.region.rds.amazonaws.com as opposed to a regular EC2 host name.
Neither of them works out of the box. The straightforward solution would be to generate RSA keys on all the servers that need access, and add them to authorized_keys on all the servers that should allow access. However, this solution isn't scalable: once we add or recreate an environment, we'd need to repeat the process.
Is there any better solution?
There is a way to set up a special backup configuration file on your other instances that would allow you to directly access the production S3 bucket from another environment within the same account. There is some risk involved with this, since it would also technically give your non-production environments the ability to edit the contents of the production bucket.
There may be some other options depending on the specifics of your configuration. Your best option would be to open a ticket with the Engine Yard Support team so we can discuss your needs further.
Is it possible to set up a separate HUB server with FTP or SFTP service only?
Open inbound port 21/22 from all environments to that HUB server, so all clients can download the database dump.
Open inbound port 3306 (or another database port) from the HUB server to RDS/the database.
Run a cron job on the HUB server to get the DB dump, push the dump to the other environments, and so on.
Back up your production database to an S3 bucket created for this purpose.
Use IAM roles to control how your other environments can connect to the same bucket.
Since the server of your production environment should be known, you can use a script to mysqldump that one server to the shared S3 bucket.
Once complete, your other servers can collect the data from that S3 bucket using a properly authorized IAM role.
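A rough sketch of that flow (the bucket name, database credentials handling, and file paths are placeholders; mysqldump/mysql are assumed to be installed on the instances):

import subprocess
import boto3

BUCKET = "my-shared-db-dumps"   # assumption: a bucket every environment can reach via its IAM role
DUMP = "/tmp/production.sql"
RDS_HOST = "environment.random_string.region.rds.amazonaws.com"  # the RDS host form from the question

# On the production environment (nightly cron): dump and upload.
with open(DUMP, "w") as out:
    subprocess.run(["mysqldump", "-h", RDS_HOST, "-u", "dbuser", "mydb"],
                   stdout=out, check=True)   # credentials supplied via ~/.my.cnf or similar
boto3.client("s3").upload_file(DUMP, BUCKET, "production.sql")

# On every other environment (nightly cron): download and restore.
boto3.client("s3").download_file(BUCKET, "production.sql", DUMP)
with open(DUMP) as dump_file:
    subprocess.run(["mysql", "-h", "localhost", "-u", "dbuser", "mydb"],
                   stdin=dump_file, check=True)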