How to set up a front end for AWS DBs without using the Internet - amazon-web-services

On AWS, I know how to set up a web server with inbound rules allowing HTTP and HTTPS, and a database security group that only allows connections from the web server. The issue is that I need to create a front end to manage the databases without using Internet access - this will be internal only, which precludes the use of a public IP / public DNS. Does anyone know how I would do this?
To further elaborate, some of our AWS accounts are for internal use only - we can log in to the console, use Cygwin to SSH in, see what's there, etc. But these accounts are for development purposes, and in a large enterprise such as this one, they are not allowed an internet gateway (IGW). So no inbound Internet access is allowed. How do I create an app (e.g., phpMyAdmin type) in which our manager can easily view and edit the data in the database, given the restriction that this must be done without inbound Internet access?

Host your database on RDS inside a VPC and create a VPN connection between your client network and your VPC.
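If you want to script that setup, here is a rough boto3 sketch of the AWS side of a site-to-site VPN (the VPC ID, router IP, ASN, and CIDR are placeholders, and the on-premises end of the tunnel still has to be configured on your own router):

import boto3

ec2 = boto3.client('ec2', region_name='us-east-1')

# Represents your on-premises router (public IP and BGP ASN are placeholders).
cgw = ec2.create_customer_gateway(BgpAsn=65000, PublicIp='203.0.113.10', Type='ipsec.1')

# Virtual private gateway attached to the VPC that hosts the RDS instance.
vgw = ec2.create_vpn_gateway(Type='ipsec.1')
ec2.attach_vpn_gateway(VpnGatewayId=vgw['VpnGateway']['VpnGatewayId'],
                       VpcId='vpc-0123456789abcdef0')

# The VPN connection itself; static routing keeps the example simple.
vpn = ec2.create_vpn_connection(
    CustomerGatewayId=cgw['CustomerGateway']['CustomerGatewayId'],
    VpnGatewayId=vgw['VpnGateway']['VpnGatewayId'],
    Type='ipsec.1',
    Options={'StaticRoutesOnly': True},
)

# Route your on-premises CIDR (placeholder) over the tunnel.
ec2.create_vpn_connection_route(
    VpnConnectionId=vpn['VpnConnection']['VpnConnectionId'],
    DestinationCidrBlock='192.168.1.0/24',
)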

Host your database on an EC2 instance and upload your front end there as well. The database will run locally on that instance, and the front end can connect to it over localhost. Since the database has no public DNS and is only listening locally, it can only be reached over SSH or through the front-end script.
You can check the official AWS documentation: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_VPC.WorkingWithRDSInstanceinaVPC.html
For the front-end script you can use https://www.adminer.org/, which is a single-file database management tool. Upload that one file to the instance and use it to connect to the database running locally on the EC2 instance.
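As a quick sanity check before wiring up Adminer, you can confirm from the instance itself that the database is listening locally. A minimal sketch, assuming a MySQL database and the pymysql package (credentials and database name are placeholders):

import pymysql  # assumes MySQL/MariaDB running on the instance and pip-installed pymysql

# Connect over the loopback interface only - nothing is exposed publicly.
conn = pymysql.connect(host='127.0.0.1', user='dbadmin', password='secret', database='appdb')
with conn.cursor() as cur:
    cur.execute('SELECT VERSION()')
    print(cur.fetchone())
conn.close()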

Related

How to connect to Google Cloud SQL with enforced SSL from within App Engine app

I would like to have SSL enforced when connecting to my Google Cloud SQL (Postgres) instance, but I can't find a way to provide a certificate to my Python app deployed to App Engine - psycopg2 requires the certificate to have proper permissions ("<cert> has group or world access; permissions should be u=rw (0600) or less"), but since there's no access to chmod on App Engine, I don't know how to change them.
Any ideas?
To provide a context, the goal is primarily to enforce secure access to the DB outside of GCP (from my workstation). The problem with App Engine apps is a side-effect of this change. But maybe there's a different and better way to achieve the same?
Thanks.
Google Cloud App Engine uses the Cloud SQL proxy for any connections made through the built-in /cloudsql Unix socket directory. These connections are automatically wrapped in an SSL layer until they reach your Cloud SQL instance.
You don't need to worry about using certificates, as the proxy automates this process for you.
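For illustration, a minimal psycopg2 sketch using that built-in socket directory (the instance connection name, database, and credentials are placeholders):

import os
import psycopg2

# Connect through the /cloudsql Unix socket directory that App Engine provides.
# The instance connection name has the form project:region:instance.
conn = psycopg2.connect(
    host='/cloudsql/my-project:us-central1:my-instance',
    dbname='appdb',
    user='postgres',
    password=os.environ.get('DB_PASSWORD'),
)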
This is a bit tricky because there is a chicken-and-egg problem here:
The Postgres driver wants a file (because it's a wrapper around a C library that reads the file)
And you cannot set the correct file permissions inside App Engine on boot.
There are several cases (and solutions) for this problem:
AppEngine Classic environment
Workaround - write the cert file yourself in the /tmp ramdisk
Use a UNIX socket to connect to a Cloud SQL instance with public IP address (no SSL).
Or use a VPC and private VPC access for a cloud SQL instance with private IP address and use the workaround.
AppEngine Flex Environment
Include the cert file in your docker image
Workaround: Write the cert file yourself in /tmp
The easiest way to work around this problem is to write the cert file yourself into /tmp - this is a ramdisk in the app engine environment.
Something like this would do the trick:
import os
import psycopg2

certfile_path = '/tmp/certfile'  # /tmp/ is a ramdisk
certfile_content = os.environ.get('CERTFILE')
if certfile_content is not None:
    with open(certfile_path, 'w+') as fp:
        fp.write(certfile_content)
    os.chmod(certfile_path, 0o600)
    # Now the certfile is "safe" to use, we can pass it in our connection string:
    connection_string = f"postgresql+psycopg2://host/database?sslmode=require&sslcert={certfile_path}"
else:
    connection_string = 'postgresql+psycopg2://host/database'
    # or just raise an exception, log the problem, and exit.
(See Postgres connection string reference for SSL options)
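If that URL is being handed to SQLAlchemy (which the postgresql+psycopg2 scheme suggests), usage would look roughly like this - a sketch assuming SQLAlchemy is installed:

from sqlalchemy import create_engine, text

# Hand the URL built above to SQLAlchemy; psycopg2 picks up sslmode/sslcert from it.
engine = create_engine(connection_string)
with engine.connect() as conn:
    print(conn.execute(text('SELECT 1')).scalar())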
Long term solution
The point of using SSL is to have both encrypted traffic between the database and you, plus avoiding man-in-the-middle attacks.
We have some options here:
Option 1: When you're inside App Engine AND your cloud SQL instance has a public IP address.
In this case, you can connect to a Postgres instance with a Unix domain socket. Google uses a proxy listening on that socket and will take care of encryption and securing the endpoints (btw - SSL is disabled).
In this case, you'll need to add Cloud SQL Client permissions to the service account you use AND add the UNIX socket directory in your YAML file (see the Google documentation).
Option 2: When you're inside App Engine Classic AND your cloud SQL instance has a PRIVATE IP address.
If your cloud SQL instance has a private IP, this can be a bit "simpler".
We need to enable Serverless VPC Access - this connects App Engine to the VPC where we have resources on private IP addresses (for public IPs it's not needed, because public IP addresses don't overlap) AND the Cloud SQL server must be connected to the VPC.
Now you can connect to Postgres as usual. The traffic inside the VPC network is encrypted. If you have just this Cloud SQL instance in the VPC, there is no need to use SSL (nobody can put a sniffer on the network or perform a MITM attack).
But if you still want/need to use SSL, you have to use the workaround described before.
Option 3: Use AppEngine FLEX environment.
The flex environment is 'just' a docker image.
You'll need to build a custom docker image for the Python runtime that includes the certfile.
Remember the Flex environment is not included in the free tier.

Cannot connect to AWS RDS

I am trying to create an AWS RDS SQL Server database and connect to it from a local machine using SSMS. Later I'll be connecting from a web application (locally, then hosted somewhere eventually). I am currently failing to connect to my instance (the instance is configured and running). The error I'm getting is network/instance related (not login). Tried telnet and I can't even hit it that way.
Looking on the web, there seems to be a setup for network connections but it talks about EC2, VPC and things I don't think I need (or do I?)
Tried (nothing worked so far): Using the IP instead of hostname, explicitly specifying the port (1433), changing user/password, crying.
Speaking of things I hope I don't need to configure, there's also IAM authentication - didn't touch that yet.
Any input is appreciated before I open a ticket with Amazon.
UPDATE:
My scenario: (screenshot)
Solution - add the inbound rule to the default security group: (screenshot)
When you work with RDS, you need to set inbound rules; otherwise, you are unable to connect to the database. This concept is covered in the AWS tutorial below, where the database is MySQL and the app is a Java web app. However, the same concepts apply with respect to inbound rules:
Creating the Amazon Relational Database Service item tracker
One tip -- when you set an inbound rule to let your development machine connect, you can select MyIP...
Also - when you host your app (for example Elastic Beanstalk), you need to set an inbound rule for that as well (as discussed in that tutorial)
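If you prefer to script the rule instead of clicking through the console, a hedged boto3 sketch (the security group ID and workstation IP are placeholders) mirroring the "My IP" option for SQL Server's port 1433 would be:

import boto3

ec2 = boto3.client('ec2')
# Open port 1433 (SQL Server) to a single development machine only.
ec2.authorize_security_group_ingress(
    GroupId='sg-0123456789abcdef0',
    IpPermissions=[{
        'IpProtocol': 'tcp',
        'FromPort': 1433,
        'ToPort': 1433,
        'IpRanges': [{'CidrIp': '198.51.100.25/32', 'Description': 'dev workstation'}],
    }],
)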

AWS EC2 for QuickBooks

AWS and network noob. I've been asked to migrate QuickBooks Desktop Enterprise to AWS. This seems easy in principle but I'm finding a lot of conflicting and confusing information on how best to do it. The requirements are:
Setup a Windows Server using AWS EC2
QuickBooks will be installed on the server, including a file share that users will map to.
Configure VPN connectivity so that the EC2 instance appears and behaves as if it were on prem.
Allow additional off site VPN connectivity as needed for ad hoc remote access
Cost is a major consideration, which is why I am doing this instead of getting someone who knows this stuff.
The on-prem network is very small - one Win2008R2 server (I know...) that hosts QB now and acts as a file server, 10-15 PCs/printers and a Netgear Nighthawk router with a static IP.
My approach was to first create a new VPC with a private subnet that will contain the EC2 instance and set up a site-to-site VPN connection with the Nighthawk for the on-prem users. I'm unclear as to whether I also need to create security group rules to only allow inbound traffic (UDP/TCP file-sharing ports) from the static IP, or if the VPN negates that need.
I'm trying to test this one step at a time and have an instance set up now. I am remote and am using my current IP address in the security group rules for the test (no VPN yet). I set up the file share but I am unable to access it from my computer. I can RDP and ping it and have turned on the firewall rules to allow NetBIOS and SMB, but still nothing. I just read another thread that says I need to set up a storage gateway, but before I do that, I wanted to see if that is really required or if there's another/better approach. I have to believe this is a common requirement but I seem to be missing something.
This is a bad approach for QuickBooks. Intuit explicitly recommends against using QuickBooks with a file share via VPN:
Networks that are NOT recommended
Virtual Private Network (VPN) Connects computers over long distances via the Internet using an encrypted tunnel.
From here: https://quickbooks.intuit.com/learn-support/en-us/configure-for-multiple-users/recommended-networks-for-quickbooks/00/203276
The correct approach here is to host QuickBooks on the EC2 instance, and let people RDP (remote desktop) into the EC2 Windows server to use QuickBooks. Do not let them install QuickBooks on their client machines and access the QuickBooks data file over the VPN link. Make them RDP directly to the QuickBooks server and access it from there.

Run multiple servers with interconnection on Amazon AWS

We are developing applications and devices that communicate with our servers. We have one "main" Java Spring server which handles almost all the HTTP requests including user authentication, storing relevant user data and giving that data to the applications. Furthermore, we have a few smaller HTTP servers (written in golang) which are both used by the "main" server to perform certain tasks but also have some public API's that apps and devices use directly.
In our current non-production setup we run all the servers locally on one machine with an apache2 instance in front which directs the requests. So the servers can be accessed through apache2 via their respective subdomains, but they also communicate with each other. When doing so, we currently simply send the request to localhost:{PORT} since they all run on the same machine. They furthermore all use the same mysql-server running on that same machine.
We are now looking to get it more production-ready and are looking to deploy it to AWS. They are currently not containerized so a solution that requires containerization (ECS? K8s?) would most likely require more work. What would be the most straightforward way to do the following:
Deploy a number of servers on AWS where they are exposed publicly with their respective domains but can also communicate internally with one another (or would they just communicate with one another using their public domains?)
Deploy a managed SQL database (Amazon RDS?) which is accessible for all the servers.
Setup the routing of the requests. Currently run our own configured apache2 but I assume we can add a managed API Gateway in AWS and configure it for our servers.
Q. Deploy a number of servers on AWS where they are exposed publicly with their respective domains but can also communicate internally with one another (or would they just communicate with one another using their public domains?)
On AWS you create a VPC (a default VPC is created automatically when your account is set up).
You can deploy a number of EC2 instances (virtual servers) with just private IP addresses and without any public access, and put them behind an ELB (Elastic Load Balancer). The ELB will take all the traffic and distribute the load onto the servers based on endpoint.
However, the EC2 instances won't have public IPs. A VPC (Virtual Private Cloud) allows your services to communicate with each other via private IPs (something like 172.31.xx.xx). You can also assign domain/sub-domain names to these private IP addresses using the Route53 service of AWS.
For example, you launch 2 servers:
Your Java application - on 172.31.1.1 (you name it xyz.myjavaapp.something.com on Route53)
Your Angular application - on 172.31.1.2
The Angular application can reach your Java application on 172.31.1.1:8080 or xyz.myjavaapp.something.com:8080
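For what it's worth, the Route53 record for such an internal name can be created with boto3 along these lines (the hosted zone ID, name, and IP are placeholders, and this assumes a private hosted zone associated with the VPC):

import boto3

route53 = boto3.client('route53')
# Point the internal name at the Java server's private IP inside the VPC.
route53.change_resource_record_sets(
    HostedZoneId='Z0123456789ABCDEFGHIJ',
    ChangeBatch={
        'Changes': [{
            'Action': 'UPSERT',
            'ResourceRecordSet': {
                'Name': 'xyz.myjavaapp.something.com',
                'Type': 'A',
                'TTL': 300,
                'ResourceRecords': [{'Value': '172.31.1.1'}],
            },
        }],
    },
)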
Q. Deploy a managed SQL database (Amazon RDS?) which is accessible for all the servers.
Yes, you can deploy an SQL database on RDS and it will be available to the EC2 instances. Just make sure you create proper security groups to allow only your servers to access it, and don't leave it open to the public internet.
An example of a VPC-only security group entry is 172.31.0.0/16. This will allow only the servers in your VPC to connect to the RDS DB, given that your VPC subnet has the range 172.31.x.x.
Q. Setup the routing of the requests. Currently run our own configured apache2 but I assume we can add a managed API Gateway in AWS and configure it for our servers.
You can set up public/private APIs and manage different endpoints using API Gateway.
Another way is to put your application servers behind an Application Load Balancer (ALB). The ALB can take care of load balancing as well as endpoint management.
For example, if you decide to deploy 2 servers for /getData and 1 server for /doSomethingElse, that can be easily managed by the ALB.
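As a sketch of what that looks like with boto3 (the listener and target group ARNs are placeholders), path-based rules on an ALB listener could be added like this:

import boto3

elbv2 = boto3.client('elbv2')
listener_arn = 'arn:aws:elasticloadbalancing:us-east-1:123456789012:listener/app/my-alb/abc123/def456'

# Send /getData traffic to the target group holding the two /getData servers.
elbv2.create_rule(
    ListenerArn=listener_arn,
    Priority=10,
    Conditions=[{'Field': 'path-pattern', 'Values': ['/getData*']}],
    Actions=[{'Type': 'forward',
              'TargetGroupArn': 'arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/getdata/aaa111'}],
)

# Send /doSomethingElse traffic to its own single-server target group.
elbv2.create_rule(
    ListenerArn=listener_arn,
    Priority=20,
    Conditions=[{'Field': 'path-pattern', 'Values': ['/doSomethingElse*']}],
    Actions=[{'Type': 'forward',
              'TargetGroupArn': 'arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/else/bbb222'}],
)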
I would suggest you use at least two servers for critical services and load-balance them behind an ELB for a production environment.
On another note, containerizing and deploying to Kubernetes is not that difficult or time-consuming. It does have a learning curve, but the benefits outweigh it.
Feel free to ask questions.

AWS Instance Security Group to give access to itself via TCP

I have an Apache server running the front end (Angular), which relies on an API hosted on the same instance as Apache. I don't want my API (Express) open to the public yet, but I need access to it from my front end, which shares the same IP. Things I've tried:
Setting API base url as 'localhost' doesn't seem to work.
Adding a security rule in AWS security groups to allow connections only to the same IP (to itself) doesn't work.
Is there any workaround for this?
Connections to the same IP are always open by default. You may need to use the private IP of the EC2 instance as your API base URL (you know the port better than I do). CORS should also be enabled for that private IP.
First of all, using Angular as the front end means the API needs to be publicly reachable and you just need to implement security, because you only serve the UI to the client; it is the user's browser that accesses the API, not the server hosting the Angular app.
You can set up another API, deployed on the same server as your UI and under the same URL, which serves as a controller for your "private API" and which you can manage using security groups in AWS.
Replace ${IP} with 172.17.0.1 so it can connect to the same EC2 instance after restarting. Also add a rule for inbound connections from the same security group (SG).
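A hedged boto3 sketch of such a self-referencing rule (the group ID and API port are placeholders - 3000 is just a common Express port):

import boto3

ec2 = boto3.client('ec2')
# Allow instances in the security group to reach the API port on each other
# (and on themselves via private IP) without exposing it to the public internet.
ec2.authorize_security_group_ingress(
    GroupId='sg-0123456789abcdef0',
    IpPermissions=[{
        'IpProtocol': 'tcp',
        'FromPort': 3000,
        'ToPort': 3000,
        'UserIdGroupPairs': [{'GroupId': 'sg-0123456789abcdef0', 'Description': 'same SG'}],
    }],
)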