App running in AWS EC2 is seen under different client IPs when connecting to different servers. Why? - amazon-web-services

I have two servers running on my own hosting: an auth server and an application server. The auth server issues an auth token for the application server. The application server checks the issued token and the client IP (it must match the IP that was used to obtain the token). In the majority of cases this works fine, but not when the client is in AWS EC2. For some reason the client IP (as seen by my servers) changes when the client is located in an AWS EC2 instance. Is it normal for AWS to use different network interfaces to connect to different servers?
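For context, the check described above typically looks something like the following Express middleware sketch (the verifyToken helper and the issuedIp field are hypothetical placeholders, not taken from the question). The comparison breaks whenever the client's outbound traffic leaves through a different interface or NAT path per destination, which is the situation the question is asking about.

// Minimal sketch of an IP-pinned token check, assuming Express and a
// hypothetical verifyToken helper; not the asker's actual code.
import express from "express";

interface TokenPayload {
  subject: string;
  issuedIp: string; // IP recorded by the auth server when it issued the token
}

// Hypothetical stand-in for real signature verification against the auth server's key.
function verifyToken(raw: string | undefined): TokenPayload | null {
  try {
    return raw ? (JSON.parse(Buffer.from(raw, "base64").toString()) as TokenPayload) : null;
  } catch {
    return null;
  }
}

const app = express();

app.use((req, res, next) => {
  const payload = verifyToken(req.header("authorization"));
  if (!payload) return res.status(401).send("invalid token");
  // This is the comparison that fails when the client's outbound IP differs
  // between the auth server and the application server.
  if (payload.issuedIp !== req.ip) {
    return res.status(403).send("token was issued to a different client IP");
  }
  next();
});

app.listen(8443);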

Related

AWS Load Balancing all of same websocket route to the same instance

I'm writing an application where multiple users connect to a WebSocket server with a URL such as wss://example.com/ws/1234.
Multiple users may connect to the same route, and I need them all to land on the same EC2 instance: everyone who connects to wss://example.com/ws/1234 should go to one server, and everyone who connects to wss://example.com/ws/4325 should likewise share a server. These routes are generated dynamically.
If a client is the first to connect to an endpoint, it should be routed to the server with the least CPU load. If a client connects to an endpoint that already has connections, it should be sent to the same server.
I've tried going into the listener rules for my EC2 Auto Scaling group, but I couldn't find any settings that seemed like they would do the trick.
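No answer is recorded for this one here, but to make the requested behaviour concrete, here is a rough sketch of the routing rule being described (an illustration only, not an ALB feature; Instance and pickInstance are made-up names):

// Illustration of the routing described above: the first client for a route id
// picks the least-loaded instance, and later clients for that id reuse it.
interface Instance {
  id: string;
  cpuLoad: number; // e.g. 0.0 - 1.0
}

const routeAssignments = new Map<string, Instance>(); // routeId -> instance

function pickInstance(routeId: string, instances: Instance[]): Instance {
  const existing = routeAssignments.get(routeId);
  if (existing) return existing; // everyone on /ws/<routeId> stays together
  const leastLoaded = instances.reduce((a, b) => (a.cpuLoad <= b.cpuLoad ? a : b));
  routeAssignments.set(routeId, leastLoaded);
  return leastLoaded;
}

// Example: both calls for "1234" return the same instance.
const fleet: Instance[] = [
  { id: "i-aaa", cpuLoad: 0.7 },
  { id: "i-bbb", cpuLoad: 0.2 },
];
console.log(pickInstance("1234", fleet).id); // i-bbb
console.log(pickInstance("1234", fleet).id); // i-bbb again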

I want to deploy a multi-tier web app into AWS but don't understand how to set it up

I was hoping someone could explain how I would set up a multi-tiered web application. There is a database tier, an app tier, a web server tier, and then the client tier. I'm not exactly sure how to separate the app tier and the web server tier, since the app tier will be in a private subnet. I would have the client send requests directly to the app server, but the private subnet is a requirement, and having the app server separated from the web server is a requirement as well.
The only idea I have had was to serve the content from the web server and then have the client send all requests to that same web server on another port, such as port 3000. If a request arrives on that port, a Node app using Express forwards the request to the app tier, since the web server can talk to the app server.
I did set up a small proof of concept doing this: the web server serves the content, another Express app listens on port 3000, the client sends its requests to port 3000, and that app just passes the exact same request on to the app server.
This is my current setup, with the web servers hosting two servers: one serving the frontend on port 80 and one receiving requests on port 3000. The server listening on port 3000 forwards all requests to the app server ALB (it's basically a copy of all the same routes as the app server, but it just forwards the requests instead of performing an action). Is there a way to avoid this extra hop in the middle, i.e. get rid of the additional server listening on 3000 without exposing the internal ALB?
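For reference, the "extra hop" described above (an Express listener on port 3000 that just relays every request to the internal app-tier ALB) might look roughly like this sketch; INTERNAL_ALB_URL and the http-proxy-middleware choice are assumptions, not the asker's actual code:

// Sketch of the relay described in the question: listen on 3000 on the web
// tier and forward everything to the internal ALB in the private subnet.
// INTERNAL_ALB_URL is a placeholder.
import express from "express";
import { createProxyMiddleware } from "http-proxy-middleware";

const INTERNAL_ALB_URL =
  process.env.INTERNAL_ALB_URL ?? "http://internal-app-alb.example.internal";

const relay = express();
relay.use(
  createProxyMiddleware({
    target: INTERNAL_ALB_URL, // private ALB, reachable only from inside the VPC
    changeOrigin: true,
  })
);
relay.listen(3000, () => console.log("relay listening on port 3000"));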
To separate your web servers and application servers, you can use a VPC with public and private subnets. In fact, this is such a common scenario that Amazon has already provided us with documentation.
As for a "better way to do this," I assume you mean security. Here are some options:
You can (and should) run host-based firewalls such as iptables on your hosts.
AWS also provides a variety of options.
You can use Security Groups, which are stateful firewalls for your hosts.
You can also use Network Access Control Lists (ACLs), which are stateless firewalls used to control traffic in and out of subnets.
AWS would also argue that many shops can improve their security posture by using managed services, so that all of the patching and maintenance is handled by AWS. For example, static content could be hosted on Amazon S3, with dynamic content provided by microservices leveraging API Gateway. Finally, from a security perspective, AWS provides services like Trusted Advisor, which can help you find and fix common security misconfigurations.
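As a concrete illustration of the Security Group option mentioned above, an ingress rule can be added with the AWS SDK for JavaScript roughly as follows (the group ID and region are placeholders; the console or CLI does the same thing):

// Sketch: allow inbound HTTP (port 80) from anywhere on an existing security
// group. The group ID below is a placeholder.
import {
  EC2Client,
  AuthorizeSecurityGroupIngressCommand,
} from "@aws-sdk/client-ec2";

const ec2 = new EC2Client({ region: "us-east-1" });

await ec2.send(
  new AuthorizeSecurityGroupIngressCommand({
    GroupId: "sg-0123456789abcdef0",
    IpPermissions: [
      {
        IpProtocol: "tcp",
        FromPort: 80,
        ToPort: 80,
        IpRanges: [{ CidrIp: "0.0.0.0/0", Description: "public HTTP" }],
      },
    ],
  })
);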

EC2 Multi App Instance - Some Ports not Reachable

I have started an AWS EC2 instance (Ubuntu 18 AMI) running three apps:
Web server on port 80
REST API on port 8786
DB on port X
I am able to
SSH into the instance
Reach the website via browser on port 80.
Reach the REST API from within the SSH session.
I am unable to
Reach the REST API via AJAX from the browser (tried postman as well).
I have
Configured the Security Group to allow inbound connections from all sources on port 8786
Verified that iptables is not loaded
Tried reaching the website from a mobile network - to no avail.
Swapped the ports between the web server and the REST API, which resulted in being able to access the API via the browser and Postman.
Verified that the API is bound to 0.0.0.0 - not to localhost.
This smells like an EC2 issue, but I have no idea what to do.
Help would be much appreciated.
As it turns out, 8786 is a reserved port and should not be used. The issues were resolved when I changed to 8080, which I should have done from the very beginning.
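For anyone who hits the same symptoms, the two checks that mattered here (the bind address and the port choice) boil down to something like this minimal sketch; 8080 is the port the answer settled on:

// Minimal REST API sketch: bind to 0.0.0.0 (not localhost) and use a port
// such as 8080 that browsers and the Security Group are both happy with.
import express from "express";

const api = express();
api.get("/health", (_req, res) => res.json({ ok: true }));

// Reachable from outside the instance only if the Security Group also
// allows inbound TCP on this port.
api.listen(8080, "0.0.0.0", () => console.log("API listening on 0.0.0.0:8080"));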

How to whitelist Swisscom PaaS domain/api for remote access

I am about to create a small web application which I might deploy to Swisscom PaaS as well.
It should be able to call a REST API on a remote server.
The remote server requires all incoming requests to be whitelisted by IP/domain.
Is it enough to whitelist *.scapp.io or myapp.scapp.io so that myapp, deployed on Swisscom PaaS, can access the remote API, or is a different domain/IP required due to the way the PaaS is set up and run?
You can find out the source IP of the Swisscom Application Cloud public offering with these commands:
$ cf ssh APP_NAME
$ curl ifconfig.co
194.209.246.112
# example for developer.swisscom.com
This IP doesn't resolve to any domain name. It may change and is not guaranteed to be stable, although since the beginning of the Application Cloud (more than 3 years) it hasn't changed. This is the outgoing IP to whitelist on the remote app.
You raised a very good point about a stable IP address pool. We are considering that and will document the IPs once it is implemented.
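If you prefer to have the app itself report its egress IP (the same information the cf ssh + curl check above gives you), a small Node sketch like this can log it at startup; the curl-like User-Agent header is an assumption about how ifconfig.co decides to return plain text:

// Log the app's outbound (egress) IP at startup, mirroring the
// `curl ifconfig.co` check from the answer above.
import https from "node:https";

https
  .get("https://ifconfig.co", { headers: { "User-Agent": "curl/8" } }, (res) => {
    let body = "";
    res.on("data", (chunk) => (body += chunk));
    res.on("end", () =>
      console.log("egress IP seen by remote servers:", body.trim())
    );
  })
  .on("error", (err) => console.error("egress IP check failed:", err));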

Connection Between Heroku and AWS EC2 Server

I have a Node.js app running on Heroku, and I created a MongoDB server on EC2 and opened port 27017. I am using the public IP address for the connection, and it introduces delay. How can I fix that problem? I want to connect the two over an internal network.
Because they're on different servers, there is bound to be a delay. You can, however, reduce the delay by putting the servers in the same country (region).
Heroku servers are located mostly in the US (Virginia); you can move your AWS servers to roughly the same region as well. Do check where the servers are located. A better alternative would be to use the MongoDB add-on: https://elements.heroku.com/addons/mongolab
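Whichever option you choose (the add-on or the EC2 host), the Node side typically just reads a connection string from the environment; here is a minimal sketch with the official MongoDB driver, where MONGODB_URI and the fallback host are assumptions about how the URI is provided:

// Minimal connection sketch with the official MongoDB Node.js driver.
// MONGODB_URI is assumed to be supplied by the environment (e.g. by a Heroku
// MongoDB add-on) or pointed manually at the EC2 host on port 27017.
import { MongoClient } from "mongodb";

const uri =
  process.env.MONGODB_URI ?? "mongodb://ec2-host.example.com:27017/mydb";

async function main() {
  const client = new MongoClient(uri);
  await client.connect();
  console.log("connected:", await client.db().command({ ping: 1 }));
  await client.close();
}

main().catch(console.error);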