Run multiple servers with interconnection on Amazon AWS

We are developing applications and devices that communicate with our servers. We have one "main" Java Spring server which handles almost all the HTTP requests, including user authentication, storing relevant user data, and giving that data to the applications. Furthermore, we have a few smaller HTTP servers (written in golang) which are both used by the "main" server to perform certain tasks and also expose some public APIs that apps and devices use directly.
In our current non-production setup we run all the servers locally on one machine with an apache2 in front which directs the requests. So a user can access each server through apache2 via its respective subdomain, but the servers also perform some communication between each other. When doing so, we currently simply send the request to localhost:{PORT} since they all run on the same machine. They furthermore all utilize the same mysql-server running on that same machine.
We are now looking to make it more production-ready and want to deploy it to AWS. The servers are currently not containerized, so a solution that requires containerization (ECS? K8s?) would most likely require more work. What would be the most straightforward way to do the following:
Deploy a number of servers on AWS where they are exposed publicly with their respective domains but can also communicate internally with one another (or would they just communicate with one another using their public domains?)
Deploy a managed SQL database (Amazon RDS?) which is accessible for all the servers.
Set up the routing of the requests. We currently run our own configured apache2, but I assume we can add a managed API Gateway in AWS and configure it for our servers.

Q. Deploy a number of servers on AWS where they are exposed publicly with their respective domains but can also communicate internally with one another (or would they just communicate with one another using their public domains?)
On AWS you create a VPC (a default VPC is created in each region when your account is first set up).
You can deploy a number of EC2 instances (virtual servers) with just private IP addresses and no public access, and put them behind an ELB (Elastic Load Balancer). The ELB will take all the traffic and distribute the load onto the servers based on the endpoint.
The EC2 instances won't have public IPs, but the VPC (Virtual Private Cloud) allows your services to communicate with each other via private IPs (something like 172.31.xx.xx). You can also give domain/sub-domain names to these private IP addresses using the Route 53 service of AWS.
For example, you launch 2 servers:
Your Java application on 172.31.1.1 (which you name xyz.myjavaapp.something.com in Route 53)
Your Angular application on 172.31.1.2
The Angular application can reach your Java application at 172.31.1.1:8080 or xyz.myjavaapp.something.com:8080.
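For reference, here is a minimal boto3 sketch of that Route 53 naming step: an A record in a private hosted zone that maps the Java app's name to its private IP. The hosted zone ID is a placeholder; the name and IP reuse the example values above.

```python
# A minimal sketch, assuming a private hosted zone already exists for the
# internal domain. The zone ID below is hypothetical.
import boto3

route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="Z0000000EXAMPLE",  # hypothetical private hosted zone
    ChangeBatch={
        "Comment": "Internal name for the Java application server",
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "xyz.myjavaapp.something.com",
                "Type": "A",
                "TTL": 300,
                "ResourceRecords": [{"Value": "172.31.1.1"}],
            },
        }],
    },
)
```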
Q. Deploy a managed SQL database (Amazon RDS?) which is accessible for all the servers.
Yes, you can deploy an SQL database on RDS and it will be available to the EC2 instances. Just make sure you create proper security groups to allow only your servers to access it, and don't leave it open to the public internet.
An example VPC-only security group entry is 172.31.0.0/16. This will allow only the servers in your VPC to connect to the RDS DB, given that your VPC subnet has the range 172.31.x.x.
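As a rough boto3 sketch of that security group: only hosts inside the 172.31.0.0/16 VPC range may reach the database port (3306 for MySQL). The VPC ID and group name are placeholders.

```python
# A rough sketch, assuming the 172.31.0.0/16 VPC range from the example.
import boto3

ec2 = boto3.client("ec2")

sg = ec2.create_security_group(
    GroupName="rds-internal-only",
    Description="Allow MySQL access from inside the VPC only",
    VpcId="vpc-0123456789abcdef0",  # hypothetical VPC ID
)

ec2.authorize_security_group_ingress(
    GroupId=sg["GroupId"],
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 3306,
        "ToPort": 3306,
        "IpRanges": [{"CidrIp": "172.31.0.0/16"}],  # VPC-internal traffic only
    }],
)
```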
Q. Set up the routing of the requests. We currently run our own configured apache2, but I assume we can add a managed API Gateway in AWS and configure it for our servers.
You can set up public/private APIs and manage different endpoints using API Gateway.
Another way is to put your application servers behind an Application Load Balancer (ALB). The ALB can take care of load balancing as well as endpoint management.
For example, if you decide to deploy 2 servers for /getData and 1 server for /doSomethingElse, that can be easily managed by the ALB's path-based rules (see the sketch below).
I would suggest you use at least two servers for critical services and load balance them behind an ALB for a production environment.
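Here is a hedged boto3 sketch of those path-based rules. The listener and target group ARNs are placeholders for resources you would have created beforehand (one target group holding the 2 /getData servers, one holding the /doSomethingElse server).

```python
# A sketch under the assumption that an ALB, a listener, and two target
# groups already exist; all ARNs below are hypothetical.
import boto3

elbv2 = boto3.client("elbv2")

LISTENER_ARN = "arn:aws:elasticloadbalancing:us-east-1:123456789012:listener/app/my-alb/abc/def"

# /getData -> target group with 2 servers
elbv2.create_rule(
    ListenerArn=LISTENER_ARN,
    Priority=10,
    Conditions=[{"Field": "path-pattern", "Values": ["/getData*"]}],
    Actions=[{
        "Type": "forward",
        "TargetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/getdata/abc123",
    }],
)

# /doSomethingElse -> target group with 1 server
elbv2.create_rule(
    ListenerArn=LISTENER_ARN,
    Priority=20,
    Conditions=[{"Field": "path-pattern", "Values": ["/doSomethingElse*"]}],
    Actions=[{
        "Type": "forward",
        "TargetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/dosomething/def456",
    }],
)
```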
On another note, containerizing and deploying to Kubernetes is not that difficult or time-consuming. Yes, it has a learning curve, but the benefits outweigh it.
Feel free to ask questions.

Related

Exposing various ports behind a load balancer on Rancher/AWS

I am setting up a Rancher environment.
The Rancher server is behind a classic ELB (since ALBs are not recommended per Rancher guidelines).
I also want to make the Prometheus and Grafana services available.
These are offered via Rancher catalogue and will run as container services, being exposed on Rancher host ports 3000 and 9090.
Since the Rancher server (per their recommendations) requires an ELB, I wanted to explore the options for making the two services above available using the most minimal possible setup.
If the server is available on say rancher.mydomain.com, ideally I would like to have the other two on grafana.mydomain.com and prometheus.mydomain.com.
Can I at least combine the latter two behind an ALB?
If so, how do I map them?
Do I place <my_rancher_host_public_IP>:3000 and <my_rancher_host_public_IP>:9090 behind an ALB?
You could do this a couple of (maybe more) ways:
use an external DNS updater like the Route 53 infra catalog item. That will automatically map DNS directly to the public IP of the host that houses the services. Modify the DNS template so it prepends the service name to the domain.
register your targets and map the ports, then set a DNS entry pointing to the ALB (see the sketch below).
The first way allows DNS to update in case the service shifts across hosts in your environment. With the second way you would force containers to specific hosts.
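For the second way, a minimal boto3 sketch might look like this, assuming you have already created an ALB with one target group per service; the instance ID and target group ARNs are placeholders.

```python
# A rough sketch: register the Rancher host as a target on ports 3000
# (Grafana) and 9090 (Prometheus), one ALB target group per service.
import boto3

elbv2 = boto3.client("elbv2")

for tg_arn, port in [
    ("arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/grafana/abc123", 3000),
    ("arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/prometheus/def456", 9090),
]:
    elbv2.register_targets(
        TargetGroupArn=tg_arn,
        Targets=[{"Id": "i-0123456789abcdef0", "Port": port}],  # hypothetical host
    )
```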

HashiCorp Vault - Setup / Architecture in Production

I'm getting ready to set up HashiCorp Vault with my web application, and while the examples HashiCorp provides make sense, I'm a little unclear on what the intended production setup should be.
In my case, I have:
a small handful of AWS EC2 instances serving my web application
a couple EC2 instances serving Jenkins for continuous deployment
and I need:
My configuration software (Ansible) and Jenkins to be able to read secrets during deployment
to allow employees in the company to read secrets as needed, and potentially, generate temporary ones for certain types of access.
I'll probably be using S3 as a storage backend for Vault.
The types of questions I have are:
Should Vault be running on all my EC2 instances, listening at 127.0.0.1:8200?
Or do I create an instance (maybe 2 for availability) that just runs Vault, and have other instances/services connect to those as needed for secret access?
If I needed employees to be able to access secrets from their local machines, how does that work? Do they set up Vault locally against the S3 storage, or should they be hitting the REST API of the remote servers from step 2 to access their secrets?
And to be clear, for any machine that's running Vault: if it's restarted, Vault would need to be unsealed again, which seems to be a manual process involving some number of key holders?
Vault runs in a client-server architecture, so you should have a dedicated cluster of Vault servers (usually 3 is suitable for small-medium installations) running in high availability (HA) mode.
The Vault servers should probably bind to an internal private IP, not 127.0.0.1, since otherwise they won't be accessible from within your VPC. You definitely do not want to bind to 0.0.0.0, since that could make Vault publicly accessible if your instance has a public IP.
You'll want to bind to the address that is advertised on the certificate, whether that's an IP or a DNS name. You should only communicate with Vault over TLS in a production-grade infrastructure.
Any and all requests go through those Vault servers. If other users need to communicate with Vault, they should connect to the VPC via a VPN or bastion host and issue requests against it.
When a machine that is running Vault is restarted, Vault does need to be unsealed. This is why you should run Vault in HA mode, so another server can accept requests. You can set up monitoring and alerting to notify you when a server needs to be unsealed (Vault's health endpoint returns a special status code for sealed nodes).
You can also read the production hardening guide for more tips.
Specifically for points 3 & 4:
They would talk to the Vault API, which is running on one or more machines in your cluster (if you do run it in HA mode with multiple machines, only one node will be active at any time). You would provision them with some kind of authentication, using one of the available authentication backends like LDAP.
Yes, by default and in its recommended configuration, if any of the Vault nodes in your cluster gets restarted or stopped, you will need to unseal it with however many keys are required, depending on how many key shards were generated when you stood up Vault.
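As an illustration of points 3 & 4, here is a minimal sketch using the third-party hvac Python client against the Vault HTTP API; the URL, credentials, auth backend, and secret path are all assumptions.

```python
# A minimal sketch, assuming a Vault cluster reachable inside the VPC and
# an enabled LDAP auth backend (pip install hvac).
import hvac

# Employees/CI talk to the Vault HTTP API over TLS, never to the S3
# backend directly.
client = hvac.Client(url="https://vault.internal.example.com:8200")  # hypothetical URL

# Authenticate using the LDAP auth backend (hypothetical credentials).
client.auth.ldap.login(username="jane.doe", password="...")

# The seal status shows whether this node needs unsealing after a restart
# (unsealing itself requires the configured number of key shards).
print("sealed:", client.sys.read_seal_status()["sealed"])

# Read a secret from a KV v2 mount (path and mount point are assumptions).
secret = client.secrets.kv.v2.read_secret_version(path="deploy/db")
print(secret["data"]["data"])
```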

Amazon EC2 Penetration Request Form

I am filling out the Penetration Test Request Form for Amazon Web Services and want help completing it. I have a dynamic IP; what should I enter for the destination IP and source IP?
The server is deployed on AWS EC2 and runs Ubuntu 12.04.
The db is on another instance using AWS RDS. No load balancer - just a single server instance for the app and a single instance for the db.
Source: the IP(s) of the machines that will initiate the scan (usually machines external to AWS, like a third-party pentest service).
Destination: your server IPs. It is better to assign Elastic IPs and then use them to fill in the request. You can use the dynamic IPs, but if your instances are stopped and started before your scan test starts, then your scan may fail since your servers will have new IPs.
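A minimal boto3 sketch of that Elastic IP suggestion: allocate an EIP and attach it so the destination IP on the form survives stop/start cycles. The instance ID is a placeholder.

```python
# A minimal sketch; the instance ID below is hypothetical.
import boto3

ec2 = boto3.client("ec2")

eip = ec2.allocate_address(Domain="vpc")
ec2.associate_address(
    InstanceId="i-0123456789abcdef0",  # hypothetical app server
    AllocationId=eip["AllocationId"],
)
print("Destination IP for the form:", eip["PublicIp"])
```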

How to make Web Services public

I created an Android application that requires the use of a web service.
I want to be able to access the app everywhere, therefore I need my web services to be public with an external IP so I can access them.
What is the best way to do it?
I have an Amazon Web Services account; I don't know if creating an instance and running the web services there would be the best solution.
My big problem with the Amazon instance is that it takes a while for the result of the web service to show up in the app.
Any ideas on how to make my web service public?
It appears that your requirement is:
Expose a public API endpoint for use by your Android application
Run some code when the API is called
There are two ways you could expose an API:
Use Amazon API Gateway, which can publish, maintain, monitor, and secure APIs. It takes care of security and throttling. A DNS name is provided, which should be used for API calls. When a request is received, API Gateway can pass the request to a web server or can trigger an AWS Lambda function to execute code without requiring a server (a minimal handler sketch follows this list).
Or, run an Amazon EC2 instance with your application. Assign an Elastic IP Address to the instance, which is a static IP address. Create an A record in Amazon Route 53 (or your own DNS server) that points a DNS name to that IP address.
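To illustrate the first option, here is a minimal sketch of a Python Lambda handler in the shape API Gateway's Lambda proxy integration expects; the response payload is made up for illustration.

```python
# A minimal sketch of a Lambda proxy-integration handler that API Gateway
# could invoke, giving the Android app a public HTTPS endpoint with no
# server to manage.
import json

def lambda_handler(event, context):
    # API Gateway passes the HTTP request details in `event`.
    params = event.get("queryStringParameters") or {}
    name = params.get("name", "world")  # hypothetical query parameter
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```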

Setting up multiple EC2 instances and multiple subdomain under one parent domain in AWS Route 53

I am developing a set of frontend webapps (for instance vaadin or angular) and backend RESTful services. Each frontend webapp will consume one or more of these backend services. I want both webapps and services to be secured over https.
Now, I want to register a single domain, say mydomain.com, and deploy the backend services such that they are available at
service1.api.mydomain.com, service2.api.mydomain.com etc. The frontend apps should be available at webapp1.mydomain.com, webapp2.mydomain.com etc.
I need to be able to set up two or more EC2 instances for the services, and the same for the webapps. For instance, service1 may be running on instance A, service2 on instance B, webapp1 on instance C, and webapp2 on instance D.
How do I configure this setup in AWS Route 53?
Since there is a limit to the max number of Elastic IPs (max 5) that can be allocated for one AWS account, I suppose separate public IPs for all the EC2 instances are not a solution, since I will have more than 5 such subdomains.
I hope you can provide a practical example configuration with two services and two webapps.
You can submit a request to get the Elastic IP (EIP) limit increased for your account. Small increases (e.g. from 5 to 10) should be fairly quick and easy to obtain. Larger increases should be obtainable if you can justify it to AWS support.
https://console.aws.amazon.com/support/home#/case/create?issueType=service-limit-increase&limitType=service-code-vpc
If you're open to using path-based routing instead of subdomain-based routing (e.g. mydomain.com/app1 and mydomain.com/app1/api) or a mix of the two (e.g. app1.mydomain.com and app1.mydomain.com/api), you could look at using an Application Load Balancer (ALB). You would need one ALB per subdomain used.
http://docs.aws.amazon.com/elasticloadbalancing/latest/application/tutorial-load-balancer-routing.html
Note: I expect subdomain-based routing to be available with the ALB in the future, but it hasn't been released yet.
ALBs could be cheaper than using Classic Elastic Load Balancers (ELBs), but if you're not using the load balancing functionality at all, EIPs may be your best bet since they're free when attached to a running instance.
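If you do go the ALB route, pointing a subdomain at the load balancer is done with a Route 53 alias record rather than an EIP. A hedged boto3 sketch; the zone IDs and ALB DNS name are placeholders for values taken from your hosted zone and the load balancer's description.

```python
# A sketch under the assumption that the mydomain.com hosted zone and an
# ALB already exist; all IDs and names below are hypothetical.
import boto3

route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="Z0000000EXAMPLE",  # hypothetical mydomain.com zone
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "webapp1.mydomain.com",
                "Type": "A",
                "AliasTarget": {
                    # The ALB's own hosted zone ID (a region-specific value).
                    "HostedZoneId": "Z35SXDOTRQ7X7K",
                    "DNSName": "my-alb-1234567890.us-east-1.elb.amazonaws.com",
                    "EvaluateTargetHealth": False,
                },
            },
        }]
    },
)
```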