HAProxy cannot be accessed - cookies

I have configured HAProxy on a RedHat server. The server is up and running without any issue, but I cannot access it through my browser. I have opened the firewall port for the bind address.
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 2080/haproxy
My haproxy.cfg is as below:
defaults
log global
mode http
option httplog
option dontlognull
retries 3
option redispatch
maxconn 2000
contimeout 5000
clitimeout 50000
srvtimeout 50000
frontend http-in
bind *:80
default_backend servers
backend servers
option httpchk OPTIONS /
option forwardfor
stats enable
stats refresh 10s
stats hide-version
stats scope .
stats uri /admin?stats
stats realm Haproxy\ Statistics
stats auth admin:pass
cookie JSESSIONID prefix
server adempiere1 192.168.1.216:8085 cookie JSESSIONID_SERVER_1 check inter 5000
server adempiere2 192.168.1.25:8085 cookie JSESSIONID_SERVER_2 check inter 5000
Any suggestions?

To view the HAProxy stats page in your browser, put these lines in your configuration file.
You will then be able to see the HAProxy stats at http://Hostname:9000
listen stats :9000
mode http
stats enable
stats hide-version
stats realm Haproxy\ Statistics
stats uri /

global
log 127.0.0.1 local0
log 127.0.0.1 local1 notice
daemon
defaults
log global
mode http
option httplog
option dontlognull
option forwardfor
retries 1 #number of times it will try to know if system is up or down
option redispatch #if one system is down, it will redispatch to another system which is up.
maxconn 2000
contimeout 5 #you can increase these numbers according to your configuration
clitimeout 50 #this is set to smaller number just for testing
srvtimeout 50 #so you can view right away the actual result
listen http-in IP_ADDRESS_OF_LOAD_BALANCER:PORT #example 192.168.1.1:8080
mode http
balance roundrobin
maxconn 10000
server adempiere1 192.168.1.216:8085 cookie JSESSIONID_SERVER_1 check inter 5000
server adempiere2 192.168.1.25:8085 cookie JSESSIONID_SERVER_2 check inter 5000
# Try accessing, from your browser, the IP address and port mentioned in the listen configuration above,
# or try it from the command line / terminal: curl http://192.168.1.1:8080
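Since the question mentions a RedHat server and firewall ports, the stats port from the listen section above may also need to be opened; a minimal sketch, assuming firewalld is the active firewall:
sudo firewall-cmd --permanent --add-port=9000/tcp
sudo firewall-cmd --reload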

Related

Flask SQL-Alchemy outputting logs with every connection and disconnection

My logs are full of connection and disconnection alerts for my flask app, about 100 every hour:
2023-02-01 13:42:22.518 [mono] [ALERT] dpg-cf7rqrha6gdpab9c5vlg-a-68d546f45f-cfct2 dpg-cfct2 1 [63da5e2e.f4699-3] user=REDACTED,db=REDACTED,app=[unknown],client=::1,LOG: connection authorized: user=REDACTED database=REDACTED application_name=psql SSL enabled (protocol=TLSv1.2, cipher=ECDHE-RSA-AES256-GCM-SHA384, bits=256)
2023-02-01 13:42:22.612 [mono] [ALERT] dpg-cf7rqrha6gdpab9c5vlg-a-68d546f45f-cfct2 dpg-cfct2 1 [63da5e2e.f4699-4] user=REDACTED,db=REDACTED,app=psql,client=::1,LOG: disconnection: session time: 0:00:00.099 user=REDACTED database=REDACTED host=::1 port=57230
I'm using Postgres and Flask-SQLAlchemy. I added this line to the config, which I thought was meant to fix this:
SQLALCHEMY_ECHO = False
However, I continue to get these logs. Is there a way I can stop these connection and disconnection logs from being output, so that I can more easily see the more helpful/important log output?
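For reference, a minimal sketch of where that setting sits in a Flask-SQLAlchemy app (the connection string is a placeholder); note that SQLALCHEMY_ECHO only controls SQLAlchemy's own statement echoing on the application side, not what the Postgres server writes to its logs:
from flask import Flask
from flask_sqlalchemy import SQLAlchemy

app = Flask(__name__)
app.config["SQLALCHEMY_DATABASE_URI"] = "postgresql://user:pass@host/dbname"  # placeholder URI
app.config["SQLALCHEMY_ECHO"] = False  # disables SQLAlchemy's client-side query echo logging

db = SQLAlchemy(app)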

Slack Bot deployed in Cloud Foundry returns 502 Bad Gateway errors

In Slack, I have set up an app with a slash command. The app works well when I use a local ngrok server.
However, when I deploy the app server to PCF, it is returning 502 errors:
[CELL/0] [OUT] Downloading droplet...
[CELL/SSHD/0] [OUT] Exit status 0
[APP/PROC/WEB/0] [OUT] Exit status 143
[CELL/0] [OUT] Cell e6cf018d-0bdd-41ca-8b70-bdc57f3080f1 destroying container for instance 28d594ba-c681-40dd-4514-99b6
[PROXY/0] [OUT] Exit status 137
[CELL/0] [OUT] Downloaded droplet (81.1M)
[CELL/0] [OUT] Cell e6cf018d-0bdd-41ca-8b70-bdc57f3080f1 successfully destroyed container for instance 28d594ba-c681-40dd-4514-99b6
[APP/PROC/WEB/0] [OUT] ⚡️ Bolt app is running! (development server)
[OUT] [APP ROUTE] - [2021-12-23T20:35:11.460507625Z] "POST /slack/events HTTP/1.1" 502 464 67 "-" "Slackbot 1.0 (+https://api.slack.com/robots)" "10.0.1.28:56002" "10.0.6.79:61006" x_forwarded_for:"3.91.15.163, 10.0.1.28" x_forwarded_proto:"https" vcap_request_id:"7fe6cea6-180a-4405-5e5e-6ba9d7b58a8f" response_time:0.003282 gorouter_time:0.000111 app_id:"f1ea0480-9c6c-42ac-a4b8-a5a4e8efe5f3" app_index:"0" instance_id:"f46918db-0b45-417c-7aac-bbf2" x_cf_routererror:"endpoint_failure (use of closed network connection)" x_b3_traceid:"31bf5c74ec6f92a20f0ecfca00e59007" x_b3_spanid:"31bf5c74ec6f92a20f0ecfca00e59007" x_b3_parentspanid:"-" b3:"31bf5c74ec6f92a20f0ecfca00e59007-31bf5c74ec6f92a20f0ecfca00e59007"
Besides endpoint_failure (use of closed network connection), I also see:
x_cf_routererror:"endpoint_failure (EOF (via idempotent request))"
x_cf_routererror:"endpoint_failure (EOF)"
In PCF, I created an https:// route for the app. This is the URL I put into my Slack App's "Redirect URLs" section as well as my Slash command URL.
In Slack, the URLs end with /slack/events
This configuration all works well locally, so I guess I missed a configuration point in PCF.
Manifest.yml:
applications:
- name: kafbot
  buildpacks:
  - https://github.com/starkandwayne/librdkafka-buildpack/releases/download/v1.8.2/librdkafka_buildpack-cached-cflinuxfs3-v1.8.2.zip
  - https://github.com/cloudfoundry/python-buildpack/releases/download/v1.7.48/python-buildpack-cflinuxfs3-v1.7.48.zip
  instances: 1
  disk_quota: 2G
  # health-check-type: process
  memory: 4G
  routes:
  - route: "kafbot.apps.prod.fake_org.cloud"
  env:
    KAFKA_BROKER: 10.32.17.182:9092,10.32.17.183:9092,10.32.17.184:9092,10.32.17.185:9092
    SLACK_BOT_TOKEN: ((slack_bot_token))
    SLACK_SIGNING_SECRET: ((slack_signing_key))
  command: python app.py
When x_cf_routererror says endpoint_failure it means that the application has not handled the request sent to it by Gorouter for some reason.
From there, you want to look at response_time. If the response time is high (typically the same value as the timeout, like 60s almost exactly), it means your application is not responding quickly enough. If the value is low, it could mean that there is a connection problem, like Gorouter tries to make a TCP connection and cannot.
Normally this shouldn't happen. The system has a health check in place that makes sure the application is up and listening for requests. If it's not, the application will not start correctly.
In this particular case, the manifest has health-check-type: process which is disabling the standard port-based health check and using a process-based health check. This allows the application to start up even if it's not on the right port. Thus when Gorouter sends a request to the application on the expected port, it cannot connect to the application's port. Side note: typically, you'd only use process-based health checks if your application is not listening for incoming requests.
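For reference, a minimal manifest sketch that keeps the standard port-based health check (the app name is taken from the question's manifest; everything else is an assumption):
applications:
- name: kafbot
  health-check-type: port   # the default; the push fails if nothing is listening on the assigned port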
The platform is going to pass in a $PORT env variable with a value in it (it is always 8080, but could change in the future). You need to make sure your app is listening on that port. Also, you want to listen on 0.0.0.0, not localhost or 127.0.0.1.
This should ensure that Gorouter can deliver requests to your application on the agreed-upon port.
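As a rough sketch of that last point, assuming the app uses slack_bolt's built-in App.start() (the question's logs show the Bolt development server), the port should come from the platform rather than being hard-coded:
import os
from slack_bolt import App

app = App(
    token=os.environ["SLACK_BOT_TOKEN"],
    signing_secret=os.environ["SLACK_SIGNING_SECRET"],
)

if __name__ == "__main__":
    # Cloud Foundry injects the listening port via $PORT; fall back to 3000 locally.
    app.start(port=int(os.environ.get("PORT", 3000)))
If the app is served through Flask or another WSGI server instead, make sure that server binds to 0.0.0.0 and the same $PORT value.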

GCP External HTTP(S) Load Balancer Returns 502: "backend_connection_closed_before_data_sent_to_client"

My HTTP(S) External Load Balancer on GCP occasionally returns a response with error code 502.
The reason given for the response is as follows:
jsonPayload: {
#type: "type.googleapis.com/google.cloud.loadbalancing.type.LoadBalancerLogEntry"
statusDetails: "backend_connection_closed_before_data_sent_to_client"
}
According to GCP documentation such response occurs because of the following reason:
The backend unexpectedly closed its connection to the load balancer
before the response was proxied to the client. This can happen if the
load balancer is sending traffic to another entity. The other entity
might be a third-party load balancer that has a TCP timeout that is
shorter than the external HTTP(S) load balancer's 10-minute
(600-second) timeout. The third-party load balancer might be running
on a VM instance. Manually setting the TCP timeout (keepalive) on the
target service to greater than 600 seconds might resolve the issue.
Reference.
In the backend of my load balancer I have a GCP VM that runs an HAProxy server (v1.8) with following configuration:
global
log /dev/log local0
log /dev/log local1 notice
chroot /var/lib/haproxy
pidfile /var/run/rh-haproxy18-haproxy.pid
user haproxy
group haproxy
daemon
stats socket /run/haproxy/admin.sock mode 660 level admin expose-fd listeners
spread-checks 21
# Default SSL material locations
ca-base /etc/ssl/certs
crt-base /etc/ssl/private
# Default ciphers to use on SSL-enabled listening sockets.
# For more information, see ciphers(1SSL). This list is from:
# https://hynek.me/articles/hardening-your-web-servers-ssl-ciphers/
# An alternative list with additional directives can be obtained from
# https://mozilla.github.io/server-side-tls/ssl-config-generator/?server=haproxy
ssl-default-bind-ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:RSA+AESGCM:RSA+AES:!aNULL:!MD5:!DSS
ssl-default-bind-options no-sslv3
defaults
mode http
log global
option httplog
option dontlognull
option http-server-close
option redispatch
retries 3
timeout http-request 10s
timeout queue 1m
timeout connect 10s
timeout client 1m
timeout server 1m
timeout http-keep-alive 10s
timeout check 10s
maxconn 10000
balance roundrobin
frontend http-80
bind *:80
mode http
option httplog
default_backend www-80
backend www-80
balance roundrobin
mode http
option httpchk /haproxy_status
http-check expect status 200
rspidel ^Server:.*
rspidel ^server:.*
rspidel ^x-envoy-upstream-service-time:.*
server backendnode1 node-1:80 check port 8080 fall 3 rise 2 inter 1597
server backendnode2 node-2:80 check port 8080 fall 3 rise 2 inter 1597
frontend health-80
bind *:8080
acl backend_dead nbsrv(www-80) lt 1
monitor-uri /haproxy_status
monitor fail if backend_dead
listen stats # Define a listen section called "stats"
bind :9000 # Listen on localhost:9000
mode http
stats enable # Enable stats page
stats hide-version # Hide HAProxy version
stats realm Haproxy\ Statistics # Title text for popup window
stats uri /haproxy_stats # Stats URI
stats auth haproxy:pass # Authentication credentials
#lastline
According to the GCP documentation, we can get rid of the 502 errors by setting a TCP keep-alive value that is higher than 600 seconds (10 minutes).
They have suggested values for Apache and Nginx.
Web server software   Parameter           Default setting            Recommended setting
Apache                KeepAliveTimeout    KeepAliveTimeout 5         KeepAliveTimeout 620
nginx                 keepalive_timeout   keepalive_timeout 75s;     keepalive_timeout 620s;
Reference.
I'm not sure which timeout values or which config options I should change in my HAProxy configuration to set the keep-alive time to more than 600s.
Would setting timeout http-keep-alive to more than 600 seconds do the trick?
Your version of HAProxy should have the keep-alive option enabled by default, but I don't see the corresponding line in your config file;
to enable it you need to add the option http-keep-alive line to the defaults section, so it will look like this:
defaults
mode http
log global
option httplog
option dontlognull
option http-server-close
option redispatch
retries 3
option http-keep-alive
To check if it's working, follow the instructions from this answer.
You may also find these threads on SO useful:
How to make HA Proxy keepalive
How to enable keep-alive in haproxy?
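As for the 600-second question itself: the idle time of a kept-alive client connection in HAProxy is governed mainly by timeout http-keep-alive (time allowed between requests) and timeout client, so a hedged sketch of the defaults section, with 620s chosen only to mirror the Apache/nginx guidance quoted in the question, might look like:
defaults
mode http
option http-keep-alive
timeout http-keep-alive 620s # idle time allowed between requests on a kept-alive connection
timeout client 620s # client-side inactivity timeout; keep it at least as large
Whether this clears the 502s depends on which side is closing the connection first.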

How DNS cache clears in dnsmasq

Does the DNS cache clear after max-cache-ttl seconds even if it receives a negative response from the parent name server defined in resolv-file=/etc/resolv.dnsmasq?
# Server Configuration
listen-address=127.0.0.1
port=53
bind-interfaces
user=dnsmasq
group=dnsmasq
pid-file=/var/run/dnsmasq/dnsmasq.pid
# Name resolution options
resolv-file=/etc/resolv.dnsmasq
cache-size=1000
neg-ttl=2
max-cache-ttl=5
domain-needed
bogus-priv
cat /etc/hosts
127.0.0.1 localhost
cat /etc/resolv.conf
nameserver 127.0.0.1
search eu-west-1.compute.internal
My domain resolves fine with 127.0.0.1 but does not resolve with the parent name server in resolv.dnsmasq. So is it resolving from cache? In that case, I have max-cache-ttl set to 5 seconds, so does it keep the cache entry even if the parent name server returns a negative response?
The domain I am trying to dig ends with rds.amazonaws.com
Thanks in advance.
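One quick way to see whether 127.0.0.1 is answering from the dnsmasq cache is to repeat the same query and watch the TTL: a cached answer's TTL counts down between runs, and here it is capped by max-cache-ttl=5. A sketch, using a made-up RDS endpoint name:
dig @127.0.0.1 mydb.abc123.eu-west-1.rds.amazonaws.com +noall +answer
# run it again within a few seconds; an answer served from cache shows a smaller TTL
dig @127.0.0.1 mydb.abc123.eu-west-1.rds.amazonaws.com +noall +answer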

Configuring ha-proxy for "war" file in jetty

I am new to HAProxy and stuck in a situation.
I have configured HAProxy for two servers, 10.x.y.10 and 10.x.y.20. These two run Jetty.
Everything works fine if one of the Jetty instances is down: the request goes to the second server and everything happens as expected.
PROBLEM: Suppose both Jetty instances are running and I remove the "war" file from one of them. The request does not go to the second server; it just gives the error "Error 404 Not Found".
I know I have configured HAProxy for Jetty, not for the war files, but is there any way to redirect the request if the war file is missing, or is the requested situation not even possible?
Please point me in the right direction.
Thanks in advance.
This is my haproxy configuration.
HA PROXY CONFIGURATION
defaults
mode http
log global
option httplog
option logasap
option dontlognull
option http-server-close
option forwardfor except 127.0.0.0/8
option redispatch
retries 3
timeout http-request 10s
timeout queue 1m
timeout connect 10s
timeout client 1m
timeout server 1m
timeout http-keep-alive 10s
timeout check 10s
maxconn 3000
frontend vs_http_80
bind *:9090
default_backend pool_http_80
backend pool_http_80
#balance options
balance roundrobin
#http options
mode http
option httpchk OPTIONS /
option forwardfor
option http-server-close
#monitoring service endpoints with healthchecks
server pool_member1 10.x.y.10:8080 # x and y are dummy variables
server pool_member2 10.x.y.20:8080
frontend vs_stats :8081
mode http
default_backend stats_backend
backend stats_backend
mode http
stats enable
stats uri /stats
stats realm Stats\ Page
stats auth serveruser:password
stats admin if TRUE
I finally found the solution. In case anybody comes across the same issue, please find the solution below.
The following link solved my problem
http://tecadmin.net/haproxy-acl-for-load-balancing-on-url-request/
Basically, the following lines in the frontend configuration did the trick.
acl is_blog url_beg /blog
use_backend tecadmin_blog if is_blog
default_backend tecadmin_website
ACL = Access Control List -> ACLs are used to test some condition and perform an action.
If the condition is satisfied, the request is directed to the matching backend.
We can use multiple ACLs and direct requests to multiple backends through the same frontend.
Next, in the backend server configuration, we need to add "check" at the end, which monitors the server's health.
backend tecadmin_website
mode http
balance roundrobin # Load Balancing algorithm
option httpchk
option forwardfor
server WEB1 192.168.1.103:80 check
server WEB2 192.168.1.105:80 check
Here's the complete configuration for my problem.
defaults
mode http
log global
option httplog
option logasap
option dontlognull
option http-server-close
option forwardfor except 127.0.0.0/8
option redispatch
retries 3
timeout http-request 10s
timeout queue 1m
timeout connect 10s
timeout client 1m
timeout server 1m
timeout http-keep-alive 10s
timeout check 10s
maxconn 3000
frontend vs_http_80
bind *:9090
acl x1_app path_dir x1
acl x2_app path_dir x2
acl x1_avail nbsrv(backend_x1) ge 1
acl x2_avail nbsrv(backend_x2) ge 1
use_backend backend_x1 if x1_app x1_avail
use_backend backend_x2 if x2_app x2_avail
backend backend_x1
#balance options
balance roundrobin
#http options
mode http
option httpchk GET /x1
option forwardfor
option http-server-close
#monitoring service endpoints with healthchecks
server pool_member1 10.x.y.143:8080 check # the health-check path comes from option httpchk above, not the server address
server pool_member2 10.x.y.141:8080 check
backend backend_x2
#balance options
balance roundrobin
#http options
mode http
option httpchk GET /x2
option forwardfor
option http-server-close
#monitoring service endpoints with healthchecks
server pool_member1 10.x.y.143:8080 check
server pool_member2 10.x.y.141:8080 check
frontend vs_stats :8081
mode http
default_backend stats_backend
backend stats_backend
mode http
stats enable
stats uri /stats
stats realm Stats\ Page
stats auth serveruser:password
stats admin if TRUE
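A quick way to exercise the ACL routing above from a client machine (the hostname is a placeholder):
curl -I http://<haproxy-host>:9090/x1/   # should be routed to backend_x1
curl -I http://<haproxy-host>:9090/x2/   # should be routed to backend_x2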