Wildcard SSL not working on subdomain pointed to a different server - amazon-web-services

This could possibly be a duplicate question, but I've tried every solution I found and nothing worked. On the main domain, I've successfully installed the SSL certificate and it is working fine. I need to install the same wildcard SSL certificate on the other two instances, which are used for the subdomains.
The overall structure I've set up so far is as follows:
Cloudflare is used as the CDN, where I've created an A record for each of the 3 instances: one for the main domain and the other 2 for the subdomains.
Created 3 instances (Ubuntu 18.04 + Apache) on AWS EC2.
When I hit a subdomain in the browser, it shows the lock sign but with Error 521: Web server is down,
but when I try it with the default Public DNS, it shows my page without any error.
Please suggest what is missing here. Thanks much!!

The 521 error from Cloudflare indicates that it is unable to speak to your host on that port.
Error 521 occurs when the origin web server refuses connections from Cloudflare. Security solutions at your origin may block legitimate connections from certain Cloudflare IP addresses.
The two most common causes of 521 errors are:
An offline origin web server application
Blocked Cloudflare requests
Please check the following:
The EC2 security group allows inbound access on both ports 80 and 443 (these cannot be locked down to just your own IP address, since Cloudflare must be able to connect).
If a NACL is in place (i.e. not the default one), ensure that both the communication ports (80/443) and the ephemeral ports are open.
Ensure that the servers are listening on both ports 80 and 443.
It is important to identify whether Cloudflare is attempting to connect over HTTP or HTTPS; it can support both of these modes depending on the configuration.
If you're still stuck after these points, you can attempt to validate the requests reaching your server using VPC Flow Logs.
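If it helps, here is a minimal sketch of how to verify the listening ports and the security group (the security group ID is a placeholder):

# on the instance: confirm Apache is listening on both ports
sudo ss -tlnp | grep -E ':(80|443)'
# from your workstation: confirm the security group opens 80/443 to the world
aws ec2 describe-security-groups --group-ids sg-0123456789abcdef0 \
    --query 'SecurityGroups[].IpPermissions'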

Finally, this answer gave me a hint: "How to install third-party SSL Certificate with AWS EC2 Instance (Ubuntu AMI)? Will it cost one-time or monthly basis?"
And I resolved this error as follows:
1. Downloaded the certificate files from the primary server.
2. Uploaded the same certificate files to the secondary server where the subdomain is pointed.
3. Then edited the /etc/apache2/sites-available/default-ssl.conf file on the secondary server, searched for "SSLCertificate", and changed those lines to point at the uploaded files (see the sketch after the commands below).
4. Enabled the SSL configuration and restarted the web server:
# enable the SSL module and the default SSL site
sudo a2enmod ssl
sudo a2ensite default-ssl
# test the configuration and reload Apache
sudo apachectl configtest
sudo apachectl graceful
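For step 3, the directives in default-ssl.conf end up looking roughly like this (the paths are placeholders; point them at wherever you uploaded your certificate, key, and chain files):

<VirtualHost _default_:443>
        SSLEngine on
        # placeholder paths; use your actual uploaded files
        SSLCertificateFile      /etc/ssl/certs/wildcard.example.com.crt
        SSLCertificateKeyFile   /etc/ssl/private/wildcard.example.com.key
        SSLCertificateChainFile /etc/ssl/certs/wildcard.example.com.ca-bundle
</VirtualHost>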

Related

Unable to open Public IPv4 DNS in AWS EC2 - Linux instance

I have a Spring Boot project which I want to host on an AWS EC2 instance. I was able to create its image using GitHub, Jenkins, and Docker. I was also able to successfully pull and run this image in the Linux console of my AWS EC2 instance.
According to the tutorial I was following, I should have been able to open the project using the public IPv4 DNS, but the response I got was that it refused to connect.
I know that this usually has to do with inbound rules, so I added a rule to allow all traffic, but it didn't help.
For anyone who wants to know:
Git-hub repository: https://github.com/SalahuddinShayan/telecom
Docker-Hub repository: https://hub.docker.com/repository/docker/salahuddinshayan/telecom
Command I used to run the image in AWS:
docker run -p 8081:8081 --name final-app --link docker-mysql:mysql salahuddinshayan/telecom
Security groups, networking details, and the error itself were shown in screenshots (not reproduced here).
I am completely stumped by it. Does anyone have an idea of what to do to fix this?
Please check if your client is calling the right protocol, e.g. http vs https.
You are transmitting on port 8081. http://3.110.29.193:8081/ works fine from the EC2 side. A 404 status is returned, so this is a client-side error, not a server-side error.
It means that no firewall is blocking traffic, and a process (your app) was found listening on the IP:port you require. The problem is that the process it encountered (your app) responds only with a WhiteLabel Error Page, which is the generic Spring Boot error page displayed when no custom error page is present. So the issue is with the Spring app itself, not with EC2 or with the connection. In other words: the traffic can reach your Spring app, but your Spring app has nothing to say in response.
As a side note, after deploying your app I would advise refining the inbound traffic rules to allow only the traffic you want. There is no need to allow all traffic on all ports.
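To see the distinction yourself, a quick check like this separates firewall problems from application problems (the second path is only an assumption about what your controllers might map):

# connection refused / timeout => firewall or no listener; HTTP 404 => the app answered
curl -i http://3.110.29.193:8081/
# try a route your Spring controllers actually map, e.g.
curl -i http://3.110.29.193:8081/telecom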

Can't run GCP VM on public IP with SSH

I am setting up a Node.js server on a virtual machine at Google Cloud Platform. I have set up SSH keys so that I can log into my VM. I can successfully log into my VM using SSH-in-browser and start my server.
I can't access my public IP address through Chrome. I get this message:
This site can’t provide a secure connection.
When I try to connect to the IP within SSH-in-browser, I get the following:
$ curl -vso /dev/null --connect-timeout 5 34.68.254.120:8080
* Trying 34.68.254.120:8080...
* connect to 34.68.254.120 port 8080 failed: Connection refused
* Failed to connect to 34.68.254.120 port 8080: Connection refused
* Closing connection 0
I'm new at this. Any ideas would be appreciated. Thanks!
Edit 1: Some more details --
Linux VM
port 8080 ingress is open on the firewall
I'm using OSLogin (enable-oslogin = TRUE, enable-oslogin-sk = FALSE)
I can successfully log into console with both SSH-in-Browser and PuTTY, and I can start my server on port 8080
In both, I get the error above when I try to connect to the IP address
EDIT:
Follow the steps below to fix the "This Site Can't Provide a Secure Connection" error.
This error typically indicates a problem with either your browser's configuration or the SSL certificate on your site. The common causes are:
1) Your local environment doesn't have an SSL certificate.
2) Outdated SSL caches in the browser. (This is one of the more popular causes. Web browsers store SSL certificates in a cache, much like other data. This means they don't have to verify the certificate every time you visit a site, which speeds up browsing. However, if your SSL certificate changes and the browser is still loading an older, cached version, it can cause this error to pop up.)
3) Incorrect time and date settings on your computer.
4) Rogue browser extensions.
5) Overzealous antivirus software.
6) An invalid or expired SSL certificate.
If your firewall rules prevent external access:
Check your firewall rules with the following command: gcloud compute firewall-rules list. With this you can review the VPC where the VM instance was migrated and whether it allows ingress on TCP port 22.
If this firewall rule is missing, you can add it in the GCP console -> VPC Networks -> select your VPC network -> click on the firewall rules to double-check that TCP port 22 is allowed.
If the issue is still ongoing after checking the firewall rules, you can follow this guide to start troubleshooting SSH connection issues.
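Since the symptom here is a refused connection on port 8080 rather than SSH, it is also worth confirming a rule for that port exists; a sketch with gcloud (the rule name is a placeholder):

gcloud compute firewall-rules list
# if nothing allows tcp:8080, create an ingress rule for it
gcloud compute firewall-rules create allow-app-8080 \
    --direction=INGRESS --action=ALLOW --rules=tcp:8080 --source-ranges=0.0.0.0/0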

How do I access the web GUI of my NiFi instance running on an AWS machine?

I am trying to run NiFi on an AWS machine and access the web GUI on my local computer.
I have followed guides such as https://community.hortonworks.com/articles/47778/hdf-installation-on-ec2.html, but whenever I type DNS:8080/nifi into my web browser I get a "connection refused" or timed-out message.
I have created an AWS Red Hat machine, installed NiFi + Java, and edited the nifi.properties file so that it now reads:
# Site to Site properties
nifi.remote.input.host=ec2-34-224-216-146.compute-1.amazonaws.com
nifi.remote.input.secure=false
nifi.remote.input.socket.port=
I have tried leaving the port number blank, as well as other numbers such as nifi.remote.input.socket.port=8082,
but neither works when I enter
ec2-34-224-216-146.compute-1.amazonaws.com:8080/nifi into my browser.
I have also tried adding the domain to my local computer's /etc/hosts file, both as the Public DNS and as the IPv4 address. I have also configured the security group on AWS with a "Custom TCP Rule" covering ports 8081, 8082, etc. for the respective ports I have attempted.
I am not sure what I am doing wrong or if I am missing a step. Any help is appreciated.
The properties you are configuring are for site-to-site connections and are not related to the UI. They would be used if another NiFi or MiNiFi instance were making a site-to-site connection to your NiFi instance.
To control the UI you should be configuring:
nifi.web.http.host=
nifi.web.http.port=8080
nifi.web.https.host=
nifi.web.https.port=
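Assuming those defaults, a quick way to confirm the UI is reachable once NiFi is restarted (a sketch; the install path depends on how NiFi was set up):

# restart NiFi and confirm something is listening on 8080
sudo /opt/nifi/bin/nifi.sh restart
sudo ss -tlnp | grep 8080
# then open TCP 8080 in the security group and browse to
# http://ec2-34-224-216-146.compute-1.amazonaws.com:8080/nifi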

Enabling SSL on Rails 4 with AWS Load Balancer, Nginx, and Puma

I have tried unsuccessfully to configure SSL for my project.
My AWS load balancer is configured correctly and accepts the certificate keys. I have configured the listeners to route both port 80 traffic and port 443 traffic to port 80 on the instance.
I would imagine that no further modification is necessary on the instance (Nginx and Puma), since everything is routed to port 80 on the instance. I have seen examples where the certificate is installed on the instances, but I understand the load balancer is the SSL termination point, so this is not necessary.
When accessing via http://www.example.com everything works fine. However, accessing via https://www.example.com times out.
I would appreciate some help with the proper high-level setup.
Edit: I have not received any response to this question. I assume it is too general?
I would appreciate confirmation that the high-level reasoning I am using is the right one: I should install the certificate on the load balancer only and configure the load balancer to accept connections on port 443, BUT route everything internally to port 80 on the web server instances.
I just stumbled over this question as I had the same problem: all requests to https://myapp.com timed out and I could not figure out why. Here, in short, is how I achieved (forced) HTTPS in a Rails app on AWS:
My app:
Rails 5 with config.force_ssl = true enabled (in production.rb), so all connections coming in over HTTP get re-routed to HTTPS by the Rails app itself. No need to set up complicated nginx rules. The same app used the gem 'rack-ssl-enforcer' when it was on Rails 4.2.
Side note: AWS load balancers used plain HTTP GET requests in the past to check the health of the instances (today they support HTTPS). Therefore, exception rules had to be defined for the SSL enforcement. Rails 5: config.ssl_options = { redirect: { exclude: -> request { request.path =~ /health-check/ } } } (in production.rb), with a respective route to a controller in the Rails app.
Side note to the side note: in Rails 5, the initializer new_framework_defaults.rb already defines "ssl_options". Make sure to deactivate this before using the "ssl_options" rule in production.rb.
AWS:
Elastic Beanstalk set up on AWS with a valid cert on the load balancer, using two listener rules:
HTTP 80 requests on the LB get directed to HTTP 80 on the instances
HTTPS 443 requests on the LB get directed to HTTP 80 on the instances (this is where the certificate needs to be applied)
You can see that the load balancer is the termination point of SSL. All requests coming in over HTTP will go through the LB and will then be redirected to HTTPS by the Rails app automatically.
The thing no one tells you
With this in place, the HTTPS requests will still time out (here I spent days figuring out why). In the end it was an extremely simple issue: the security group of the load balancer (in the AWS console -> EC2 -> Security Groups) only accepted requests on port 80 (HTTP). Just activate port 443 (HTTPS) as well. It should then work (at least it did for me).
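For reference, opening 443 on the load balancer's security group can also be done from the CLI; a sketch, with a placeholder group ID:

# allow inbound HTTPS to the load balancer (sg-... is the LB's security group)
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 443 --cidr 0.0.0.0/0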
I don't know if you solved your problem, but for whoever finds this question, here is what I did to get it working.
I've been reading all day and found a mix of two configurations that are working at this moment.
Basically, you need to configure nginx to redirect to HTTPS, but some of the recommended configurations do nothing to the nginx config.
Basically I'm using this gist configuration:
https://gist.github.com/petelacey/e35c98f9a35063a89fa9
But on top of that configuration I added the command to restart the nginx server:
https://gist.github.com/KeithP/f8534c04d20c2b4e4b1d
My take on this is that by the time the eb deploy process copies the config files, nginx has already started(?), making those changes useless. Hence the need to restart it manually; if someone has a better approach, let us know.
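The linked gists boil down to something like the following nginx rule, which redirects plain HTTP based on the X-Forwarded-Proto header the ELB sets (a sketch, not the gists verbatim):

# inside the server block nginx uses for the app
if ($http_x_forwarded_proto != 'https') {
    return 301 https://$host$request_uri;
}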
Michael Fehr's answer worked and should be the accepted answer. I had the same problem; adding config.force_ssl = true is what I had missed. One remark: you don't need to add the Elastic Beanstalk configuration file they say you have to add if you are using the load balancer. That can be misleading, and they do not specify it in the docs.

Connection getting refused to socket.io server on Amazon EC2

I have set up a micro EC2 instance on AWS. Currently I am using the free tier in Oregon. There are two problems I am facing.
When I try to SSH into the instance using the public DNS, it says the host does not exist, but when I try connecting using the public IP, it connects. What setting is needed to use the public DNS?
I have opened the SSH client using the IP address. I want to set up my application, which needs Node.js and MongoDB. I installed Node.js using this
Next I installed MongoDB using this
Then I connected to my instance using FileZilla and uploaded my code to it. I then start my node application, which uses socket.io.
When I try to connect to the socket.io server using a web browser, I get a message that says connection refused (error 111). I have opened TCP port 80 in the instance's security groups. In iptables, I have forwarded port 80 to 8080, but it still does not work. I have also checked that the firewall is disabled on EC2. Kindly help me resolve this issue.
Did you check whether all of the necessary ports are open in your Amazon security policy?
What you can do is allow all traffic in the Amazon security policy for a test and see whether the connection goes through or not.
You might also check whether you need to access the DB from outside. In that case, you also have to open the MongoDB port and set up MongoDB correctly as well.
Other tools that might be useful for testing firewall and connection issues are tcpdump and the syslog file.
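For reference, the port 80 -> 8080 redirect mentioned in the question is typically written like this, and it is worth confirming the app is really listening first (a sketch; your existing rules may differ):

# redirect inbound TCP 80 to the node app on 8080
sudo iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 8080
# confirm the app is listening before testing from outside
sudo ss -tlnp | grep -E ':(80|8080)'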
For the DNS issue, did you try nslookup on that name to see whether the IP shown matches your server's IP?
As Amazon gives the server a long DNS hostname, I always use my own domain name. It's much easier.
Example: ec2.domainname.com, which points to the Amazon IP address.
Hope that helps.
My problem is resolved now.
For the DNS issue: earlier I needed a proxy to access the internet, so I guess the DNS name was not getting resolved. When I tried proxy-free internet, I was able to SSH using the public DNS.
And regarding the connection to socket.io, I used port 8080 instead of 80 and used "sudo node main.js" to run my node file. Now I am able to connect to the socket.io server and MongoDB.
Another thing I want to ask: would running the node file with sudo rights create a security issue?
Thanks for the answer! That also worked for me. I had the same problem trying to connect through sockets (http://myipaddress:3000) to a Node.js server; I tried opening ports on the actual EC2 instance and disabling the firewall through SSH, but nothing worked. I had to go to Security Groups in the EC2 console and add a new inbound TCP rule enabling that port.