Amazon AWS EC2 disable access to installed application using public IP4 - amazon-web-services

I created a new Amazon AWS EC2 instance.
I installed an Apache2 web server with a WordPress app.
I configured my domain name and added a load balancer to redirect to HTTPS using an Amazon public SSL certificate.
Everything works perfectly and I can access my web site at https://mysiteweb.com/.
Even when I access my site at http://mysiteweb.com, the redirection to https:// is performed.
The problem is that I can still access my app using the EC2 public IPv4 address, http://XX.XXX.XXX.XX, and no redirection is performed there.
Same thing with the public DNS name (IPv4), ec2-XX-XX-XX-XX.compute-1.amazonaws.com: no redirection there either.
How can I resolve this?
Thank you.

You should update the security group of your instance to only allow inbound access on port 80/443 from the security group attached to the load balancer.
Your load balancer has at least one security group attached, such as the one below:
sg-123456
INBOUND RULES
| Protocol | Port | Source    |
|----------|------|-----------|
| TCP      | 80   | 0.0.0.0/0 |
| TCP      | 443  | 0.0.0.0/0 |
You would then update the instance security group to match the example below, where sg-123456 is the load balancer's security group:
sg-123457
INBOUND RULES
| Protocol | Port | Source    |
|----------|------|-----------|
| TCP      | 80   | sg-123456 |
| TCP      | 443  | sg-123456 |
By doing this you prevent anything other than the load balancer from making HTTP requests to your instance.
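If you manage the rules programmatically, the instance-group ingress described above can be built as below — a minimal sketch in the shape boto3's `authorize_security_group_ingress` expects; the helper name is mine and the group IDs are the placeholders from the tables:

```python
def lb_only_ingress(lb_sg_id, ports=(80, 443)):
    """Ingress permissions that allow TCP on the given ports only from
    members of the load balancer's security group (not from 0.0.0.0/0)."""
    return [
        {
            "IpProtocol": "tcp",
            "FromPort": port,
            "ToPort": port,
            "UserIdGroupPairs": [{"GroupId": lb_sg_id}],
        }
        for port in ports
    ]

rules = lb_only_ingress("sg-123456")
# e.g. boto3.client("ec2").authorize_security_group_ingress(
#     GroupId="sg-123457", IpPermissions=rules)
```

Referencing the group ID in `UserIdGroupPairs` (rather than a CIDR) is what keeps the rule valid even as the load balancer's IP addresses change.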
You can further increase the security of your instance and prevent this scenario by moving the instance into a private subnet, so that no one is able to connect to it publicly.
In addition, configure the web server you're running to redirect any host name that does not match your target hostname to the hostname you're expecting.
This can be accomplished by adding a default vhost that catches all requests; in web servers such as Apache and Nginx this is the first vhost defined. Then add an additional vhost with ServerAlias set to the domain you expect the user to land on.
Doing this prevents requests that hit your load balancer under the wrong hostname (crawlers, direct IP access) from being served your site.
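For Apache, the catch-all pattern described above might look like the following — a minimal sketch, assuming the domain from the question; the DocumentRoot path and the www alias are placeholders:

```apache
# Default catch-all vhost: being defined first, it matches any Host header
# not claimed by a later vhost and redirects to the canonical domain.
<VirtualHost *:80>
    ServerName default.invalid
    Redirect permanent / https://mysiteweb.com/
</VirtualHost>

# The real site vhost, matched only for the expected hostnames.
<VirtualHost *:80>
    ServerName mysiteweb.com
    ServerAlias www.mysiteweb.com
    DocumentRoot /var/www/wordpress
</VirtualHost>
```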

The issue can be rectified by configuring the security group (SG) of your EC2 instance to allow incoming connections only from the SG of your load balancer:
Security groups for your Application Load Balancer
Security groups for instances in a VPC

Related

Is there an AWS security group rule specifically allowing access to AWS KMS?

I have a node.js server running in EC2 that uses AWS KMS to encrypt/decrypt data.
I can successfully use the aws-sdk to carry out my tasks using
const AWS = require('aws-sdk');
const kms = new AWS.KMS();
kms.decrypt( ... )
I now want to lock down my infrastructure using security groups.
I cannot work out what rule to use to allow this server to access the AWS KMS resources without having an outgoing rule allowing all traffic, anywhere:
Outbound rules:
| Type        | Protocol | Port range | Destination |
|-------------|----------|------------|-------------|
| All traffic | All      | All        | 0.0.0.0/0   |
| All traffic | All      | All        | ::/0        |
This may be excessive, but is there a rule I can use that specifically allows outbound access to the AWS KMS service?
A CIDR block, a security group ID or a prefix list has to be specified, so I cannot use the endpoint hostname or href from the KMS service object:
kms Service {
  ...
  isGlobalEndpoint: false,
  endpoint: Endpoint {
    protocol: 'https:',
    host: 'kms.eu-west-2.amazonaws.com',
    port: 443,
    hostname: 'kms.eu-west-2.amazonaws.com',
    pathname: '/',
    path: '/',
    href: 'https://kms.eu-west-2.amazonaws.com/'
  },
  ...
}
Inspecting the Remote Address of https://kms.eu-west-2.amazonaws.com/ gives an IP address of 52.94.48.24:443. Using this in a specific rule works intermittently, but I can find no AWS documentation suggesting that this IP address is fixed. I would imagine it is not.
| Type  | Protocol | Port range | Destination    |
|-------|----------|------------|----------------|
| HTTPS | TCP      | 443        | 52.94.48.24/32 |
Any guidance is most appreciated!
Thanks, A.
I assume that by locking down your infrastructure you mean that you put your EC2 instances in a private subnet. In this case you should create a VPC endpoint for the KMS service.
A VPC endpoint will have a network interface in your subnets, which provides a private IP address in every subnet and, potentially, a private DNS hostname (if the VPC has this property enabled).
Moreover, you can allow traffic between the EC2 instances and the VPC endpoint based on security groups (no need to specify IPs).
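Creating such an endpoint can be sketched with boto3's `create_vpc_endpoint`; the helper below only builds the parameters (the VPC, subnet, and SG IDs are placeholders, and the region is the eu-west-2 from the question):

```python
def kms_endpoint_params(region, vpc_id, subnet_ids, sg_ids):
    """Build kwargs for boto3 ec2.create_vpc_endpoint: an interface
    endpoint to AWS KMS with private DNS enabled, so that
    kms.<region>.amazonaws.com resolves to the endpoint's private IPs."""
    return {
        "VpcEndpointType": "Interface",
        "VpcId": vpc_id,
        "ServiceName": f"com.amazonaws.{region}.kms",
        "SubnetIds": list(subnet_ids),
        "SecurityGroupIds": list(sg_ids),
        "PrivateDnsEnabled": True,
    }

params = kms_endpoint_params("eu-west-2", "vpc-0abc", ["subnet-0abc"], ["sg-0abc"])
# e.g. boto3.client("ec2").create_vpc_endpoint(**params)
```

With private DNS enabled, the aws-sdk code in the question keeps working unchanged, because the SDK's default KMS hostname now resolves privately.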

How to expose an application running on IPv6 protocol with network load balancer on AWS

I have an application which is running on port 7071, and when I run:
netstat -aon | grep 7071
I get:
tcp6 :::7071 3204/java (snipped)
I am able to create a target group with TCP:7071, the host returns healthy, and I have created a dualstack (internet-facing) NLB for this.
Still, when I try to access it from an EC2 instance in another AWS account, the connection times out:
telnet dualstack.name.elb.eu-west-2.amazonaws.com 80
The security group allows all traffic on port 80, including IPv6.
We have four AWS accounts which serve different environments (Dev, Test, Beta, Prod).
The application can have only one running instance due to license restrictions, so we need to expose it to the other AWS accounts; that's why this setup (which is not working).
Please help.

AWS Windows Server Hosting: Ports other than 80 not working

I am not able to access ports other than port 80 on my AWS host.
I hosted my app at http://18.222.65.31. This works as expected, but I hosted another app at http://18.222.65.31:81/api/values, which is not reachable from outside the AWS instance.
What I have tried so far:
I added a firewall inbound rule for port 81.
Added a custom TCP rule for the instance's security group from the AWS Console.
Is there something I am missing?
UPDATE:
1. Instance Detail:
2. Security Group detail:
3. VM Firewall Advanced Settings(Inbound):
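One way to narrow down where the block is (security group vs. in-guest firewall) is to test reachability of each port from another machine. A minimal sketch; the host and ports are the ones from the question:

```python
import socket

def port_open(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# e.g. compare port_open("18.222.65.31", 80) with port_open("18.222.65.31", 81):
# a timeout on 81 but not 80 points at the security group or firewall,
# while "connection refused" points at no listener bound on that port.
```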

How can I troubleshoot an AWS Application Load Balancer giving 504, while the EC2 instance behind it gives 200?

I have an EC2 instance with a few applications successfully deployed onto it, listening for connections on ports 3000/3001/3002. I can correctly load a web page from it by connecting to its public DNS or public IP on the given port. I.e. curl http://<ec2-ip-address>:3000 works. So I know that the apps are running, and I know that the port bindings/firewall rules/EC2 security groups are all set up correctly to receive connections from the outside world.
I also have an Application Load Balancer, which is supposed to route traffic to the 3 apps depending on the host name, but it always gives me "504 Gateway Time-out". I've checked all the settings but I can't see what's wrong and I'm not really sure how to troubleshoot it from here.
The ALB has a single HTTPS/443 listener, with a cert that's valid for mydomain.com, app1.mydomain.com, app2.mydomain.com, app3.mydomain.com.
The listener has 3 rules, plus the default rule:
Host == app1.mydomain.com => app1-target-group
Host == app2.mydomain.com => app2-target-group
Host == app3.mydomain.com => app3-target-group
Default action (last resort) => default-target-group
Each target group contains only the single EC2 instance, over HTTP, with the following ports:
app1-target-group: 3000
app2-target-group: 3001
app3-target-group: 3002
default-target-group: 3000
Given that I can access the app directly, I'm sure it must be a problem with the way I've configured the ALB/listener/target groups. But the 504 doesn't give me much to go on.
I've tried to turn on access logs to an S3 bucket, but it doesn't seem to be writing anything there. There's a single object called ELBAccessLogTestFile, and no actual logs in the bucket.
EDIT: Some more information... I actually have nginx installed on the EC2 instance, which is where I was previously doing the SSL termination and hostname-to-port mapping/routing. If I change the default-target-group above to point to port 443 over HTTPS, then it works!
So for some reason, routing traffic
- from the ALB to the EC2 instance over HTTPS on port 443 -> OK!
- from the ALB to the EC2 instance over HTTP on port 3000 -> Broken!
But again, I can hit the instance directly on HTTP/3000 from my laptop.
Communication between resources in the same security group is not open by default. Security group membership alone does not provide special access. You still need to open the ports in the security group to allow other resources in the security group to access those ports. You can specify the security group ID in the rule's source field if you don't want to open it up beyond the resources in the security group.
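The rule described above — opening a port to members of the same group by putting the security group ID in the source field — can be sketched in the shape boto3 expects; the group ID is a placeholder and port 3000 is taken from the question:

```python
def same_sg_ingress(sg_id, port):
    """Ingress permission allowing TCP on `port` only from members of
    the given security group (e.g. the ALB, if it shares the group),
    rather than from any CIDR."""
    return {
        "IpProtocol": "tcp",
        "FromPort": port,
        "ToPort": port,
        "UserIdGroupPairs": [{"GroupId": sg_id}],
    }

rule = same_sg_ingress("sg-0abc", 3000)
# boto3.client("ec2").authorize_security_group_ingress(
#     GroupId="sg-0abc", IpPermissions=[rule])
```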

AWS Elastic Load Balancer connection timed out

I set up a load balancer in an Availability Zone and added some EC2 instances in the same zone. The health check works fine. Then I tried to access the load balancer using its host name from outside. Even though I can access the individual hosts behind the load balancer without any issue, I got a connection time-out error when I tried to connect to the load balancer:
$ wget -O test "http://xxxx.us-west-1.elb.amazonaws.com:8080/"
--2014-04-01 21:26:59-- http://xxxx.us-west-1.elb.amazonaws.com:8080/
Resolving xxxx.us-west-1.elb.amazonaws.com... 11.111.111.11
Connecting to xxxx.us-west-1.elb.amazonaws.com|11.111.111.11|:8080... failed: Connection timed out.
Listener configuration is like this:
| Load Balancer Protocol | Load Balancer Port | Instance Protocol | Instance Port | Cipher | SSL Certificate |
|------------------------|--------------------|-------------------|---------------|--------|-----------------|
| HTTP                   | 8080               | HTTP              | 8080          | N/A    | N/A             |
Any insight/comment would be appreciated.
It turned out that it was because I had set it up as an internal VPC load balancer. In that case I have to access it through a private IP address :)
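Whether a classic ELB is internal can be read from its `Scheme` attribute; a hedged sketch using the shape of boto3's `elb` `describe_load_balancers` response (the name below is a placeholder, not a real load balancer):

```python
def is_internal(lb_description):
    """True if a classic ELB description reports the 'internal' scheme,
    meaning its DNS name resolves only to private IPs inside the VPC."""
    return lb_description.get("Scheme") == "internal"

# Shape of one entry in LoadBalancerDescriptions, as returned by
# boto3.client("elb").describe_load_balancers()
desc = {"LoadBalancerName": "my-elb", "Scheme": "internal"}
```

An internal-scheme load balancer will time out from the public internet exactly as described in the question; only an "internet-facing" one gets publicly routable addresses.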