Running Jenkins and Spring-boot on single EC2 instance - amazon-web-services

I have a Spring Boot application running on an EC2 instance. It is publicly accessible from an Elastic IP, say 123.456.78.90, via an Apache httpd server. I have the following virtual host entry in httpd.conf:
<VirtualHost *:80>
    ProxyPreserveHost On
    ProxyRequests Off
    ServerName 123.456.78.90
    ProxyPass / http://127.0.0.1:8080/
    ProxyPassReverse / http://127.0.0.1:8080/
</VirtualHost>
Now I have installed Jenkins on the same EC2 instance and want it to be accessible from my Elastic IP 123.456.78.90, but on a different port, say 9090: 123.456.78.90:9090 should take me to Jenkins, while 123.456.78.90 should still take me to my Spring Boot application. I am not sure of the best way to configure this. For Jenkins I tried the following virtual host entry in my httpd.conf file, but it's not working:
<VirtualHost *:9090>
    ProxyPreserveHost On
    ProxyRequests Off
    ServerName 123.456.78.90:9090
    ProxyPass / http://127.0.0.1:8080/
    ProxyPassReverse / http://127.0.0.1:8080/
</VirtualHost>
I would appreciate it if someone could point me in the right direction.
UPDATE: I have a simple rule directing inbound traffic over HTTP.

Why not just use Jenkins's port directly, i.e. 8080, instead of routing it through Apache?
In any case, I think the problem is the lack of a Listen directive in Apache for port 9090.
See https://httpd.apache.org/docs/2.4/bind.html
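In case it helps, a minimal sketch of what the missing pieces might look like (the Jenkins port 9091 is an assumption, since the Spring Boot app already occupies 8080, so Jenkins would need to be started on another port, e.g. with --httpPort=9091 or via HTTP_PORT in /etc/default/jenkins on Debian-based systems):

```apache
# Tell Apache to accept connections on port 9090 in addition to 80
Listen 9090

<VirtualHost *:9090>
    ProxyPreserveHost On
    ProxyRequests Off
    # Backend port assumes Jenkins was moved off 8080, e.g. to 9091
    ProxyPass / http://127.0.0.1:9091/
    ProxyPassReverse / http://127.0.0.1:9091/
</VirtualHost>
```

Note that the vhost in the question proxies 9090 back to 127.0.0.1:8080, which is the Spring Boot app, so even with the Listen directive added it would show the wrong application.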

Have you tried following the manual at https://wiki.jenkins-ci.org/display/JENKINS/Installing+Jenkins+on+Ubuntu, specifically the "Setting up an Apache Proxy for port 80 -> 8080" section? I think if you just change the 80 to 9090, the manual might work for you.
Also, since you are on EC2, you may have to configure the security group in your AWS console to allow inbound access to that port from outside the network.

Related

How to redirect traffic from port 80 to 8080 - Load balancer - Cloud DNS

I have a requirement similar to this post:
Google cloud load balancer port 80, to VM instances serving port 9000
I like one of the answers (not the accepted one), but how do I do it? Or is there an alternate way?
" If your app is HTTP-based (looks like it), then please have a look
at the new HTTP load balancing announced in June. It can take incoming
traffic at port 80 and forward to a user-specified port (eg. port
9000) on the backend. The doc link for the command is here:
https://developers.google.com/compute/docs/load-balancing/http/backend-service#creating_a_backend_service"
I don't want to create static IP after static IP and lose track.
Scenario:
A Compute Engine instance with an application running on port 8080 or 8443 (the firewall is open for 8080 and 8443, and it has a static IP).
Now I want to hook it up to a domain.
Problem:
I have to specify the port number, like http://mywebsite:8080
Goal: Allow use like http://mywebsite
Can someone explain how Cloud DNS and the load balancer work? Are both needed for my scenario? Help me connect the dots.
Thanks
Note: the application works on 8080 only (it won't run on 80).
DNS knows nothing about port numbers (except in special record types such as SRV). DNS is a hostname-to-IP-address translation service.
You can either use a proxy/load-balancer to proxy port 80 to 8080 or configure your app to run on port 80.
Port 80 is a privileged port on Linux, so binding to it means running the application with root privileges, starting it with sudo, or granting the binary the CAP_NET_BIND_SERVICE capability.
Most applications that run on non-standard ports have a web server in front of them such as Apache or Nginx. The web server proxies port 80 to 8080 and provides a more resilient Internet facing service.
I don't want to create static IP after static IP and lose track
Unfortunately, you will need to manage your services and their resources. If you deploy a load balancer, then you can usually use private IP addresses for the compute instances. Only the load balancer requires a public IP address. The load balancer will proxy port 80 to 8080.
However, assuming that your requirements are small, you can assign a public IP address to the instance, install Apache or Nginx, and run your application on port 8080.
Today, it is rare that Internet-facing web services do not support HTTPS (port 443). Using a load balancer simplifies configuring TLS and certificate management. You can also configure TLS in Apache/Nginx and Let's Encrypt. That removes the requirement that your app supports TLS directly on something like port 8443.
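As a sketch of that Apache/Nginx option, TLS termination in Nginx with a Let's Encrypt certificate might look like the following (the domain name and certificate paths are assumptions; the paths match what certbot typically issues):

```nginx
server {
    listen 443 ssl;
    server_name mywebsite.example;

    # Paths as issued by certbot; adjust for your domain
    ssl_certificate     /etc/letsencrypt/live/mywebsite.example/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/mywebsite.example/privkey.pem;

    location / {
        # Forward client information to the app on 8080
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header Host $http_host;
        proxy_pass http://127.0.0.1:8080;
    }
}
```

With this in place the app keeps serving plain HTTP on 8080 and never needs to handle TLS itself.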
I found this article and it works - https://eladnava.com/binding-nodejs-port-80-using-nginx/
Steps (after sudo apt-get update):
sudo apt-get install nginx
Remove the default site:
sudo rm /etc/nginx/sites-enabled/default
Create a new site file named node:
sudo nano /etc/nginx/sites-available/node
Add the following to the file, updating the domain name and your app's port (8080 or other):
server {
    listen 80;
    server_name somedomain.co.uk;
    location / {
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header Host $http_host;
        proxy_pass "http://127.0.0.1:8080";
    }
}
Create a symbolic link:
sudo ln -s /etc/nginx/sites-available/node /etc/nginx/sites-enabled/node
Restart nginx
sudo service nginx restart
Credit to the original author, Elad Nava.

How to secure a fastapi app hosted on EC2 instance with a self-signed SSL certificate?

I have a FastAPI app hosted on an EC2 instance using docker-compose.yml. Currently the app is not secured (HTTP, not HTTPS). I am trying to secure the app with a self-signed cert by following the tutorial Deploy your FastAPI API to AWS EC2 using Nginx.
I have the following in the fastapi_nginx file in /etc/nginx/sites-enabled/:
server {
    listen 80;
    listen 443 ssl;
    ssl on;
    ssl_certificate /etc/nginx/ssl/server.crt;
    ssl_certificate_key /etc/nginx/ssl/server.key;
    server_name x.xx.xxx.xxx;
    location / {
        proxy_pass http://0.0.0.0:8000/docs;
    }
}
But it doesn't seem to work. When I do https://x.xx.xxx.xxx, I get the error:
This page isn’t working
x.xx.xxx.xxx didn’t send any data.
ERR_EMPTY_RESPONSE
But http://x.xx.xxx.xxx is working like before.
I am not sure if I am missing anything or making any mistakes.
P.S.: I also tried the steps mentioned in the article here and it still wasn't working.
I have also checked the inbound rules in my security groups.
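For reference, a self-signed certificate and key matching the paths in the config above can be generated with openssl like this (the subject CN is a placeholder; use your instance's address):

```shell
# Generate a self-signed certificate valid for one year, no passphrase
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout server.key -out server.crt \
  -days 365 -subj "/CN=x.xx.xxx.xxx"

# The files would then be moved to /etc/nginx/ssl/ as referenced in the config
```

Browsers will still warn about the certificate since it is not signed by a trusted CA, but the connection will be encrypted.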
You are proxying HTTPS traffic to /docs; have you tried proxy_pass http://localhost:8000; instead?
Also, 0.0.0.0 is not a good proxy target; it means "all IP addresses on the local machine", as noted here. Try 127.0.0.1 or localhost.
You can check for errors in /var/log/nginx/error.log.
Finally, check that your security group and route table allow the traffic.
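Putting those suggestions together, a corrected server block might look like this (a sketch; port 8000 and the certificate paths are taken from the question, and the deprecated ssl on; directive is dropped in favor of the ssl flag on the listen line):

```nginx
server {
    listen 80;
    listen 443 ssl;
    ssl_certificate     /etc/nginx/ssl/server.crt;
    ssl_certificate_key /etc/nginx/ssl/server.key;
    server_name x.xx.xxx.xxx;

    location / {
        # Proxy to the app root over loopback, not 0.0.0.0, and not /docs
        proxy_pass http://127.0.0.1:8000;
    }
}
```

After editing, sudo nginx -t validates the config before a reload.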
Since you are using docker-compose.yml, you can configure it as follows:
Extend your docker-compose.yml to include nginx as well.
In the mounts below, nginx.conf is the file you have defined locally and certs holds the certificates. It is also best to keep nginx in the same network as the fastapi app so they can communicate.
The nginx.conf should be modified to point at the Docker service name of the fastapi app:
location / {
    proxy_pass http://my-fastapi-app:8000/docs;
}
An example snippet below:
...
networks:
  app_net:
services:
  my-fastapi-app:
    ...
    networks:
      - app_net
  nginx:
    image: 'bitnami/nginx:1.14.2'
    ports:
      - '80:8080'
      - '443:8443'
    volumes:
      - ./nginx.conf:/opt/bitnami/nginx/conf/nginx.conf:ro
      - ./certs:/opt/bitnami/nginx/certs/:ro
      - ./tmp/:/opt/bitnami/nginx/tmp/:rw
    networks:
      - app_net
Additionally, I would suggest looking into Caddy; certificate provisioning and renewal are handled automatically.
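As a sketch, a Caddyfile for this setup could be as short as the following (the domain and upstream service name are placeholders); for a publicly reachable domain, Caddy obtains and renews the TLS certificate automatically:

```
myapp.example.com {
    reverse_proxy my-fastapi-app:8000
}
```

This replaces both the nginx config and the manual certificate handling in one step.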

Configure Apache reverse proxy to a AWS ALB on port 443

I've set up an Apache reverse proxy (which receives traffic from the outside world via a firewall) with the configuration below:
<VirtualHost *:443>
    ServerName xyz.example.com
    ProxyRequests Off
    ProxyPass / https://internal-voyager-dev-1960104633.us-east-1.elb.amazonaws.com/
    ProxyPassReverse / https://internal-voyager-dev-1960104633.us-east-1.elb.amazonaws.com
    SSLProxyEngine on
    SSLProxyVerify none
    SSLProxyCheckPeerCN off
    SSLProxyCheckPeerName off
    SSLEngine on
    SSLOptions +StrictRequire
    SSLProtocol All -SSLv3 -SSLv2 -TLSv1 -TLSv1.1
    ...
</VirtualHost>
This reverse proxy points to an AWS ALB listener on port 443. The ALB then processes the request based on a rule that maps the host (xyz.example.com) to a target group. But this is not working; I am getting a 502 Bad Gateway error.
If I change the config to point the reverse proxy at http://alb-cname and use the ALB's port-80 listener, I can bring up the application, but since we run a Rails application we get the error HTTP Origin header didn't match request.base_url.
I would appreciate any ideas on how to solve this issue.
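For the Rails "HTTP Origin header didn't match request.base_url" error on the port-80 path, one thing worth trying (a sketch, not a confirmed fix; it requires mod_headers to be enabled) is preserving the original host and telling the backend the original scheme:

```apache
# Inside the VirtualHost that proxies to the ALB's port-80 listener
ProxyPreserveHost On
RequestHeader set X-Forwarded-Proto "https"
RequestHeader set X-Forwarded-Port "443"
```

Rails derives request.base_url from the Host and X-Forwarded-* headers, so forwarding them usually makes the Origin check pass.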

Cannot access airflow web server via AWS load balancer HTTPS because airflow redirects me to HTTP

I have an Airflow web server configured on an EC2 instance; it listens on port 8080.
I have an AWS ALB (application load balancer) in front of the EC2 instance, listening at HTTPS 80 (internet-facing), with the instance target port at HTTP 8080.
I cannot browse https://<airflow link> from the browser because the Airflow web server redirects me to http://<airflow link>/admin, which the ALB does not listen on.
If I browse https://<airflow link>/admin/airflow/login?next=%2Fadmin%2F, I see the login page, because this link does not redirect me.
My question is how to change Airflow so that when I browse https://<airflow link>, the web server redirects me to https://..., not http://..., so that the AWS ALB can process the request.
I tried changing base_url in airflow.cfg from http://localhost:8080 to https://localhost:8080, per the answer below, but I do not see any difference after my change...
Anyway, how do I access https://<airflow link> through the ALB?
Since Airflow uses Gunicorn, you can configure the forwarded_allow_ips value as an environment variable instead of having to use an intermediary proxy like Nginx.
In my case I just set FORWARDED_ALLOW_IPS = * and it's working perfectly fine.
In ECS you can set this in the webserver task configuration if you're using one Docker image for all the Airflow components (webserver, scheduler, worker, etc.).
Finally I found a solution myself.
I introduced an nginx reverse proxy between the ALB and the Airflow web server, i.e.:
https request -> ALB:443 -> nginx proxy:80 -> web server:8080. I make the nginx proxy tell the Airflow web server that the original request was https, not http, by adding the HTTP header X-Forwarded-Proto: https.
The nginx server is co-located with the web server, and its config is /etc/nginx/sites-enabled/vhost1.conf (see below). I also deleted the /etc/nginx/sites-enabled/default config file.
server {
    listen 80;
    server_name <domain>;
    index index.html index.htm;
    location / {
        proxy_pass_header Authorization;
        proxy_pass http://localhost:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
        proxy_http_version 1.1;
        proxy_redirect off;
        proxy_set_header Connection "";
        proxy_buffering off;
        client_max_body_size 0;
        proxy_read_timeout 36000s;
    }
}
User user389955's own solution is probably the best approach, but for anyone looking for a quick fix (or wanting a better idea of what is going on), this seems to be the culprit.
In the following file (your Python distribution may differ):
/usr/local/lib/python3.5/dist-packages/gunicorn/config.py
The following section prevents forwarded headers from being trusted from anything other than localhost:
class ForwardedAllowIPS(Setting):
    name = "forwarded_allow_ips"
    section = "Server Mechanics"
    cli = ["--forwarded-allow-ips"]
    meta = "STRING"
    validator = validate_string_to_list
    default = os.environ.get("FORWARDED_ALLOW_IPS", "127.0.0.1")
    desc = """\
        Front-end's IPs from which allowed to handle set secure headers.
        (comma separate).

        Set to ``*`` to disable checking of Front-end IPs (useful for setups
        where you don't know in advance the IP address of Front-end, but
        you still trust the environment).

        By default, the value of the ``FORWARDED_ALLOW_IPS`` environment
        variable. If it is not defined, the default is ``"127.0.0.1"``.
        """
Changing 127.0.0.1 to specific IPs, or to * if the IPs are unknown, will do the trick.
At this point, I haven't found a way to set this parameter from within the Airflow config itself. If I find a way, I will update my answer.
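Rather than editing Gunicorn's config.py in place, the environment-variable route would look like this (a sketch; it relies on the FORWARDED_ALLOW_IPS lookup visible in the config.py excerpt above):

```shell
# Gunicorn's forwarded_allow_ips default reads this variable, so exporting
# it before launch avoids patching installed code.
export FORWARDED_ALLOW_IPS='*'
echo "FORWARDED_ALLOW_IPS=$FORWARDED_ALLOW_IPS"

# Then start the webserver in the same environment, e.g.
#   airflow webserver
```

An upgrade of Gunicorn will not revert this, unlike an edit to config.py.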
We solved this problem in my team by adding an HTTP listener to our ALB that redirects all HTTP traffic to HTTPS (so we have an HTTP listener AND an HTTPS listener). Our Airflow webserver tasks still listen on port 80 for HTTP traffic, but this HTTP traffic is only in our VPC so we don't care. The connection from browser to the load balancer is always HTTPS or HTTP that gets rerouted to HTTPS and that's what matters.
Here is the Terraform code for the new listener:
resource "aws_lb_listener" "alb_http" {
  load_balancer_arn = aws_lb.lb.arn
  port              = 80
  protocol          = "HTTP"

  default_action {
    type = "redirect"

    redirect {
      port        = "443"
      protocol    = "HTTPS"
      status_code = "HTTP_301"
    }
  }
}
Or, if you're an AWS-console kind of person, you can set up the same default action for the listener in the console.
I think you have everything working correctly. The redirect you are seeing is expected as the webserver is set to redirect from / to /admin. If you are using curl, you can pass the flag -L / --location to follow redirects and it should bring you to the list of DAGs.
Another good endpoint to test on is https://<airflow domain name>/health (with no trailing slash, or you'll get a 404!). It should return "The server is healthy!".
Be sure you have https:// in the base_url under the webserver section of your airflow config.
Digging into the gunicorn documentation: it seems to be possible to pass any command line argument (when gunicorn command is called) via the GUNICORN_CMD_ARGS environment variable.
So what I'm trying out is setting GUNICORN_CMD_ARGS=--forwarded-allow-ips=* since all the traffic will come to my instance from the AWS ALB... I guess the wildcard could be replaced with the actual IP of the ALB as seen by the instance, but that'll be next step...
Since I'm running on ECS, I'm passing it as:
- Name: GUNICORN_CMD_ARGS
  Value: --forwarded-allow-ips=*
in the Environment of my task's container definition.
PS: per the docs, this possibility was added as of Gunicorn 19.7; for comparison, Airflow 1.10.9 seems to be on Gunicorn 19.10, so you're good to go with any (more or less) recent version of Airflow!
I encountered this issue too when using the official apache airflow helm chart (version 1.0.0).
Problem
Originally I had configured the webserver service with type LoadBalancer.
webserver:
  service:
    type: LoadBalancer
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-internal: "true"
      service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:us-west-2:1234512341234:certificate/231rc-r12c3h-1rch3-1rch3-rc1h3r-1r3ch
      service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
This resulted in the creation of a classic elastic load balancer.
This mostly worked but when I clicked on the airflow logo (which links to https://my-domain.com), I'd get redirected to http://my-domain.com/home which failed because the load balancer was configured to use HTTPS only.
Solution
I resolved this by installing the AWS Load Balancer Controller on my EKS cluster and then configuring ingress.
The ingress-related portion of the chart config looks like this:
ingress:
  enabled: true
  web:
    host: my-airflow-address.com
    annotations:
      kubernetes.io/ingress.class: alb
      alb.ingress.kubernetes.io/subnets: subnet-01234,subnet-01235,subnet-01236
      alb.ingress.kubernetes.io/scheme: internal # if in private subnets
      alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}]'
webserver:
  service:
    type: NodePort
Notes
It might be possible to configure the webserver to use an ALB instead of classic ELB and configure it to handle the HTTP routing, but I have not tested it.

AWS ELB not populating x-forwarded-for header

We are using Amazon Elastic Load Balancer and have 2 apache servers behind it.
However, we are not able to get the X-Forwarded-For header on the application side.
I read a similar post but could not find a solution in it:
Amazon Elastic load balancer is not populating x-forwarded-proto header
This is how the ELB listeners are configured (load balancer protocol/port -> instance protocol/port):
HTTP 80  -> HTTP 80   (cipher N/A, SSL cert N/A)
TCP  443 -> TCP  443  (cipher N/A, SSL cert N/A)
Would changing the 443 listener to HTTPS (Secure HTTP) instead of TCP populate the headers? The other option is SSL (Secure TCP).
If this works, I would also like to know why and what makes the difference.
Amazon now supports passing the source address along via the Proxy Protocol (a TCP-level header), as discussed in this article.
Apache does not at this time support the Proxy Protocol natively. If you read the comments there are source patches to let Apache handle it, or you could switch to nginx.
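For the nginx route, accepting the Proxy Protocol is a sketch like the following (the backend port and trusted subnet are assumptions, and the ELB's TCP listener must have Proxy Protocol enabled on the AWS side):

```nginx
server {
    # Accept the Proxy Protocol preamble prepended by the ELB
    listen 80 proxy_protocol;

    # Trust the Proxy Protocol source only from the load balancer's subnet
    set_real_ip_from 10.0.0.0/8;
    real_ip_header proxy_protocol;

    location / {
        # $proxy_protocol_addr holds the original client address
        proxy_set_header X-Forwarded-For $proxy_protocol_addr;
        proxy_pass http://127.0.0.1:8080;
    }
}
```

The real_ip directives require nginx to be built with the ngx_http_realip_module, which stock distribution packages generally include.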
I had the same requirement.
I have an AWS load balancer pointing to a web server on port 80.
All the HTTPS requests are terminated using an AWS SSL certificate, but my client also asked me to redirect all port-80 requests to HTTPS.
I'm using an Apache server, so I added the following lines to the virtual host config file (httpd.conf):
RewriteEngine On
RewriteCond %{HTTP:X-Forwarded-Proto} !=https
RewriteRule ^/(.*)$ https://%{SERVER_NAME}/$1 [R=301,L]
Then I restarted the Apache service and voilà!
Below is the virtual host config; you will need to do the same for your subdomains, for example www.yourdomain.com:
<VirtualHost *:80>
    RewriteEngine On
    RewriteCond %{HTTP:X-Forwarded-Proto} !=https
    RewriteRule ^/(.*)$ https://%{SERVER_NAME}/$1 [R=301,L]
    ServerAdmin webmaster@yourdomain.com
    DocumentRoot "/apache2/htdocs/YourDomainFolderWeb"
    ServerName yourdomain.com
    ErrorLog "logs/yourdomain.com-error_log"
    CustomLog "logs/yourdomain.com-access_log" common
</VirtualHost>
Hope it works.
More info at:
https://aws.amazon.com/premiumsupport/knowledge-center/redirect-http-https-elb/
Best