I have an OpenSearch instance in a VPC behind an nginx proxy.
I cannot see the tenants in OpenSearch: I can create them, but not see them. And when I try to switch tenants, it says "Failed to switch tenant. Invalid cookie".
Has anyone encountered the same problem? Thank you.
Here is my configuration; I took it from the AWS documentation:
server {
    listen 443;
    server_name $host;
    rewrite ^/$ https://$host/_dashboards redirect;

    ssl_certificate /etc/nginx/cert.crt;
    ssl_certificate_key /etc/nginx/cert.key;
    ssl on;
    ssl_session_cache builtin:1000 shared:SSL:10m;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers HIGH:!aNULL:!eNULL:!EXPORT:!CAMELLIA:!DES:!MD5:!PSK:!RC4;
    ssl_prefer_server_ciphers on;

    location /_dashboards {
        # Forward requests to Dashboards
        proxy_pass https://$domain-endpoint/_dashboards;

        # Handle redirects to Cognito
        proxy_redirect https://$cognito_host https://$host;

        # Update cookie domain and path
        proxy_cookie_domain $domain-endpoint $host;
        proxy_cookie_path / /_dashboards/;

        # Response buffer settings
        proxy_buffer_size 128k;
        proxy_buffers 4 256k;
        proxy_busy_buffers_size 256k;
    }

    location ~ \/(log|sign|fav|forgot|change|saml|oauth2) {
        # Forward requests to Cognito
        proxy_pass https://$cognito_host;

        # Handle redirects to Dashboards
        proxy_redirect https://$domain-endpoint https://$host;

        # Update cookie domain
        proxy_cookie_domain $cognito_host $host;
    }
}
The line
proxy_cookie_path / /_dashboards/;
should be
proxy_cookie_path ~/ /_dashboards/;
Note the ~ (tilde), which makes proxy_cookie_path treat its first argument as a regular expression. I encountered the same issue and this fixed it.
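In context, the Dashboards location block from the configuration above would then look something like this (a minimal sketch; $domain-endpoint and $cognito_host are the same placeholders used in the AWS template):

location /_dashboards {
    proxy_pass https://$domain-endpoint/_dashboards;
    proxy_redirect https://$cognito_host https://$host;
    proxy_cookie_domain $domain-endpoint $host;
    # The '~' prefix makes the first argument a regular expression, so the
    # paths of cookies set by the upstream are rewritten to /_dashboards/
    # and the browser sends them back to the Dashboards endpoint.
    proxy_cookie_path ~/ /_dashboards/;
}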
I am trying to build a speech recognition-based application. It runs on Django with Django-channels and Daphne, and Nginx as the web server, on an Ubuntu EC2 instance on AWS. It should run in the browser, so I am using WebRTC to get the audio stream – or at least that’s the goal. I'll call my domain mysite.co here.
Django serves the page properly on http://www.mysite.co:8000, and Daphne seems to run too; the logs show:
2022-10-17 13:05:02,950 INFO Starting server at fd:fileno=0, unix:/run/daphne/daphne0.sock
2022-10-17 13:05:02,951 INFO HTTP/2 support enabled
2022-10-17 13:05:02,951 INFO Configuring endpoint fd:fileno=0
2022-10-17 13:05:02,965 INFO Listening on TCP address [Private IPv4 address of my EC2 instance]:8000
2022-10-17 13:05:02,965 INFO Configuring endpoint unix:/run/daphne/daphne0.sock
I used the Daphne docs to set up Daphne with supervisor. There, they use port 8000.
My first Nginx config file nginx.conf (I shouldn't use that one, should I?) looks like this:
worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;

events {
    worker_connections 768;
    # multi_accept on;
}

http {
    ##
    # Basic Settings
    ##
    sendfile on;
    tcp_nopush on;
    types_hash_max_size 2048;
    # server_tokens off;
    server_names_hash_bucket_size 64;
    # server_name_in_redirect off;
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    ##
    # SSL Settings
    ##
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3; # Dropping SSLv3, ref: POODLE
    ssl_prefer_server_ciphers on;

    ##
    # Logging Settings
    ##
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    # Gzip Settings
    gzip on;

    ##
    # Virtual Host Configs
    ##
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;

    upstream channels-backend {
        server mysite.co:80;
    }

    server {
        location / {
            try_files $uri @proxy_to_app;
        }

        location @proxy_to_app {
            proxy_pass http://mysite.co;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
            proxy_redirect off;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Host $server_name;
        }
    }
}
# and the mail settings, but I don't use them
Currently, the homepage of my server just serves an HTML file that I set up in my first Nginx server block (I created it while figuring out how to get TLS working on Nginx; I don't need the HTML here):
server {
    root /var/www/mysite/html;
    index index.html index.htm index.nginx-debian.html;
    server_name mysite.co www.mysite.co;

    location / {
        try_files $uri $uri/ =404;
    }

    listen [::]:443 ssl ipv6only=on; # managed by Certbot
    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/mysite.co/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/mysite.co/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}

server {
    if ($host = www.mysite.co) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    if ($host = mysite.co) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    listen 80;
    listen [::]:80;
    server_name mysite.co www.mysite.co;
    return 404; # managed by Certbot
}
I need WebRTC to access the audio stream that should run through Daphne, but for that, I need HTTPS because you can’t access user media via unencrypted protocols. I have created a TLS cert with Let’s Encrypt for Nginx (cf. above), but of course this only works on port 443. I can’t (and probably shouldn’t be able to?) reach port 8000 via HTTPS.
I am a bit lost at this point, my Nginx experience is very limited. Do I need to bind port 8000 to 443? If so, what do I need to do with my Nginx config for the HTML file that is currently served there? Am I on the right track at all?
If I should share other config files from Nginx or supervisor, please let me know.
I was on the wrong track; it's actually very straightforward. There's no need to run it on port 8000, you can run it conveniently on 443.
You don't configure the SSL in the Nginx server blocks; you do it right where you start the Daphne server, by adding -e ssl:443:privateKey=key.pem:certKey=crt.pem to your daphne command. Of course you must have generated an SSL certificate beforehand; Let's Encrypt works just fine here as well, in which case privateKey is privkey.pem and certKey is fullchain.pem.
(This snippet by itself won't work; depending on your needs you might have to add other flags as well, like -u or --endpoint.)
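Putting that together, a minimal sketch of the command with the Let's Encrypt paths from earlier in the question might look like this (myproject.asgi:application is a placeholder for your own ASGI application module, and any extra flags you need still apply):

# Start Daphne with TLS terminated by Daphne itself on port 443;
# the certificate paths are the Certbot defaults shown above.
daphne -e ssl:443:privateKey=/etc/letsencrypt/live/mysite.co/privkey.pem:certKey=/etc/letsencrypt/live/mysite.co/fullchain.pem myproject.asgi:application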
I'm trying to set up a reverse WSS proxy with nginx to an Amazon API Gateway WebSocket API, but I have had no luck with the nginx configuration, so I would be glad if you could help me sort this out.
Let me give you some details:
I have an EC2 instance running nginx with an Elastic IP address attached to it.
I also have DNS records pointing traffic from connect.example.com to that IP address.
I have set up nginx as a reverse proxy to forward traffic from connect.example.com to app.example.com on port 443 with SSL (I have generated the relevant certificates).
On app.example.com lies a WebSocket API on Amazon's API Gateway service.
I can see from nginx's access logs that my requests reach the EC2 instance, but I always get error responses (400, 403, 500, 502, etc.) no matter how I change the nginx config file.
I don't seem to understand where the problem lies, even though I have searched around and tried various configurations.
I'm attaching my config files below for reference:
nginx.conf
# Based on https://www.nginx.com/resources/wiki/start/topics/examples/full/#nginx-conf
user daemon daemon;
worker_processes auto;
error_log "/opt/bitnami/nginx/logs/error.log";
pid "/opt/bitnami/nginx/tmp/nginx.pid";

events {
    worker_connections 1024;
}

http {
    #include mime.types;
    #default_type application/octet-stream;
    log_format main '$remote_addr - $remote_user [$time_local] '
                    '"$request" $status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log "/opt/bitnami/nginx/logs/access.log";
    #add_header X-Frame-Options SAMEORIGIN;
    client_body_temp_path "/opt/bitnami/nginx/tmp/client_body" 1 2;
    proxy_temp_path "/opt/bitnami/nginx/tmp/proxy" 1 2;
    fastcgi_temp_path "/opt/bitnami/nginx/tmp/fastcgi" 1 2;
    scgi_temp_path "/opt/bitnami/nginx/tmp/scgi" 1 2;
    uwsgi_temp_path "/opt/bitnami/nginx/tmp/uwsgi" 1 2;
    #connection_pool_size 112;
    #sendfile on;
    #tcp_nopush on;
    #tcp_nodelay on;
    #gzip on;
    #gzip_http_version 1.0;
    #gzip_comp_level 2;
    #gzip_proxied any;
    #gzip_types text/plain text/css application/javascript text/xml application/xml+rss;
    #keepalive_timeout 65;
    #ssl_protocols TLSv1.2 TLSv1.3;
    #ssl_ciphers HIGH:!aNULL:!MD5;
    client_max_body_size 80M;
    #server_tokens on;
    #include "/opt/bitnami/nginx/conf/server_blocks/*.conf";

    # HTTP Server
    #server {
    #    # Port to listen on, can also be set in IP:PORT format
    #    listen 80;
    #    include "/opt/bitnami/nginx/conf/bitnami/*.conf";
    #    include "/opt/bitnami/nginx/conf/ssl/ssl-redirect.conf";
    #    location /status {
    #        stub_status on;
    #        access_log off;
    #        allow 127.0.0.1;
    #        deny all;
    #    }
    #}

    include "/opt/bitnami/nginx/conf/ssl/ssl.conf";
}
ssl.conf
resolver app.example.com;

map $http_upgrade $connection_upgrade {
    default upgrade;
    '' close;
}

server {
    listen 443 ssl;
    #listen [::]:443 ssl;
    server_name connect.example.com;
    #ssl on;
    ssl_certificate /opt/bitnami/nginx/conf/bitnami/certs/server.crt;
    ssl_certificate_key /opt/bitnami/nginx/conf/bitnami/certs/server.key;
    ssl_dhparam /etc/ssl/certs/dhparam.pem;
    ssl_protocols TLSv1.1 TLSv1.2;
    ssl_ciphers HIGH:!aNULL:!MD5;
    root /usr/share/nginx/html;
    underscores_in_headers on;

    location / {
        proxy_set_header Sec-WebSocket-Key $http_sec_websocket_key;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $http_host;
        proxy_pass https://ws-backend$uri$is_args$args;
        proxy_read_timeout 9000;
        proxy_pass_request_headers on;

        # Websocket support
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_set_header Sec-WebSocket-Protocol $http_sec_websocket_protocol;
        proxy_set_header Sec-WebSocket-Extensions $http_sec_websocket_extensions;
        proxy_set_header Sec-WebSocket-Version $http_sec_websocket_version;
        proxy_set_header Sec-WebSocket-Accept $http_sec_websocket_accept;
    }

    error_page 404 /404.html;
    location = /404.html {
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
    }
}

upstream ws-backend {
    server app.example.com:443;
}
When I connect directly to app.example.com I have no problem, and I get the expected response (see the first screenshot).
But when I connect through connect.example.com I get the error response shown in the second screenshot.
I use a VPC-based Elasticsearch domain, and to connect to Kibana I use an nginx reverse proxy.
I followed this: https://aws.amazon.com/premiumsupport/knowledge-center/kibana-outside-vpc-nginx-elasticsearch/?nc1=h_ls.
When I try to access https://ec2-x-x-x-x.region-x.compute.amazonaws.com (the EC2 instance containing nginx),
I get a redirect to https://ec2-x-x-x-x.region-x.compute.amazonaws.com/login?response_type=code&client_id=xxxx... instead of https://auth.website.com/login?response_type=code&client_id=xxxx... (auth.website.com is the Cognito host).
Then I get a 502 Bad Gateway.
My nginx config:
server {
    listen 443;
    server_name $host;
    rewrite ^/$ https://$host/_plugin/kibana redirect;

    ssl_certificate /etc/nginx/cert.crt;
    ssl_certificate_key /etc/nginx/cert.key;
    ssl on;
    ssl_session_cache builtin:1000 shared:SSL:10m;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers HIGH:!aNULL:!eNULL:!EXPORT:!CAMELLIA:!DES:!MD5:!PSK:!RC4;
    ssl_prefer_server_ciphers on;

    location /_plugin/kibana {
        # Forward requests to Kibana
        proxy_pass https://vpc-domain-xxxxx.region.es.amazonaws.com/_plugin/kibana;

        # Handle redirects to Amazon Cognito
        proxy_redirect https://auth.exmample.com https://$host;

        # Update cookie domain and path
        proxy_cookie_domain vpc-domain-xxxxx.region.es.amazonaws.com $host;
        proxy_cookie_path / /_plugin/kibana/;

        # Response buffer settings
        proxy_buffer_size 128k;
        proxy_buffers 4 256k;
        proxy_busy_buffers_size 256k;
    }

    location ~ \/(log|sign|error|fav|forgot|change|saml|oauth2) {
        # Forward requests to Cognito
        proxy_pass https://auth.exmample.com;

        # Handle redirects to Kibana
        proxy_redirect https://vpc-domain-xxxxx.region.es.amazonaws.com https://$host;

        # Update cookie domain
        proxy_cookie_domain auth.exmample.com $host;
    }
}
Thank you
Reload the page with the browser Developer Tools open and the "Network" tab selected. You might be able to start investigating the cause from there.
Access your EC2 instance, then check the nginx logs located in the /var/log/nginx/ directory (for Linux-based distributions); see the example command below.
Check the security group of your EC2 instance.
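For example, on the proxy instance you could follow both logs while reproducing the redirect (these are the default log paths mentioned above; they may differ on your distribution):

# Watch access and error logs live while you retry the request in the browser
sudo tail -f /var/log/nginx/access.log /var/log/nginx/error.log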
I need to know if it is possible to use Nginx as a reverse proxy to serve several web apps, each one hosted on a different Raspberry Pi.
As can be seen in the diagram, the Raspberry Pis will all be connected to an unmanaged switch. On the first one I intend to install nginx so it can act as a reverse proxy, routing by the website requested from the internet, e.g. www.site1.com, www.site2.com, etc.
Is this possible?
Will I be able to access those RPis from a computer connected to the modem, not to the switch?
Note: the modem is a Wi-Fi modem and the switch is an unmanaged wired switch.
Apologies for my poor drawing skills, and thanks for any help. I need to know if this idea is possible before buying all this stuff.
I think it is possible, but there are some requirements:
a static external IP assigned to the modem;
static IPs on the RPis;
correct forwarding rules on the modem.
That is, you need to forward all requests like the following:
modem:80 -> rp0:80
modem:443 -> rp0:443
On rp0 the ports may differ from 80 and 443, so please set up the correct rules and reflect them in the nginx config.
After that, set up upstreams or use the IPs of rp1-3 in the website configs:
upstream rp1 {
    server 192.168.1.11:port;
}
upstream rp2 {
    server 192.168.1.12:port;
}
upstream rp3 {
    server 192.168.1.13:port;
}
Replace port with the port that the appropriate RPi listens on.
Website configs will be like the following:
server {
    server_name site1.com www.site1.com;
    location / { proxy_pass http://rp1; }
}
server {
    server_name site2.com www.site2.com;
    location / { proxy_pass http://rp2; }
}
Add any params you need.
Also, if you are going to host some static websites, the best way is to place them on rp0; a sketch of such a server block follows.
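For instance, a static site hosted directly on rp0 could be served with a plain server block like this (the domain and root path are placeholders):

server {
    listen 80;
    server_name site4.com www.site4.com;
    # Serve files straight from rp0's filesystem; no proxying to the other RPis
    root /var/www/site4;
    index index.html;
}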
EDIT 1
Example of working config:
server {
    listen 80;
    server_name site1.com www.site1.com;
    location / { rewrite ^ https://$host$request_uri permanent; }
}

server {
    listen 443 ssl;
    server_name site1.com www.site1.com;
    ssl_certificate /etc/letsencrypt/live/site1/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/site1/key.pem;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers HIGH:!aNULL:!MD5;

    location / {
        proxy_pass http://rp1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-for $remote_addr;
        port_in_redirect off;
        proxy_redirect http://rp1/ /;
    }
}
Please note, if you are going to use Let's Encrypt, the best way is to set up certbot (or something similar) on rp0, since it will be easier to renew certs automatically. Also, use /etc/letsencrypt/live/site1/fullchain.pem.
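As an illustration, obtaining the certificate on rp0 with certbot's nginx plugin might look like this (the domain names are placeholders; certbot typically places fullchain.pem and privkey.pem under /etc/letsencrypt/live/<first-domain>/ and sets up automatic renewal via its own timer):

# Request and install a certificate for the site terminated on rp0
sudo certbot --nginx -d site1.com -d www.site1.com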
In order to use multiple SSL domains, make sure the installed nginx supports SNI:
# nginx -V
nginx version: nginx/1.14.0
built by gcc 4.8.5 20150623 (Red Hat 4.8.5-16) (GCC)
built with OpenSSL 1.0.2k-fips 26 Jan 2017
TLS SNI support enabled
This is the nginx conf on the side of the head node:
server {
    listen 80;
    server_name www.codingindfw.com codingindfw.com;
    location / { rewrite ^ https://$host$request_uri permanent; }
}

server {
    listen 443 ssl;
    server_name www.codingindfw.www codingindfw.com;
    client_max_body_size 4G;
    ssl_certificate /etc/letsencrypt/live/www.koohack.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/www.koohack.com/privkey.pem;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers HIGH:!aNULL:!MD5;
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    location / {
        proxy_pass http://192.168.0.8;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-for $remote_addr;
        port_in_redirect off;
        proxy_redirect http://192.168.0.8/ /;
    }
}
And this is the nginx conf file on the client running the actual Django app:
server {
    listen 80 default_server;
    server_name www.codingindfw.com;
    client_max_body_size 4G;
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    location = /favicon.ico { access_log off; log_not_found off; }

    location /static/ {
        root /home/pi/coding-in-dfw;
    }

    location /media/ {
        root /home/pi/coding-in-dfw;
    }

    location / {
        include proxy_params;
        proxy_pass http://unix:/home/pi/coding-in-dfw/mysocket.sock;
    }
}
I've run into a small issue: we're using a load balancer for a new project, but we cannot force the www. prefix without causing a redirect loop between requests.
We're currently using NGINX, and the redirect snippet is the following:
LOAD BALANCER NGINX CONFIG
# FORGE CONFIG (DOT NOT REMOVE!)
include forge-conf/mywebsite.com/before/*;

# FORGE CONFIG (DOT NOT REMOVE!)
include upstreams/mywebsite.com;

server {
    listen 443 ssl;
    listen [::]:443 ssl;
    server_name .mywebsite.com;

    if ($host !~* ^www\.) {
        rewrite ^(.*)$ https://www.mywebsite.com$1;
    }

    # FORGE SSL (DO NOT REMOVE!)
    ssl_certificate /etc/nginx/ssl/mywebsite.com/225451/server.crt;
    ssl_certificate_key /etc/nginx/ssl/mywebsite.com/225451/server.key;

    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;

    charset utf-8;

    access_log off;
    error_log /var/log/nginx/mywebsite.com-error.log error;

    # FORGE CONFIG (DOT NOT REMOVE!)
    include forge-conf/mywebsite.com/server/*;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;

        proxy_pass http://370308_app/;
        proxy_redirect off;

        # Handle Web Socket Connections
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}

# FORGE CONFIG (DOT NOT REMOVE!)
include forge-conf/mywebsite.com/after/*;
HTTP SERVER NGINX CONFIG
# FORGE CONFIG (DOT NOT REMOVE!)
include forge-conf/mywebsite.com/before/*;

server {
    listen 80;
    listen [::]:80;
    server_name .mywebsite.com;
    root /home/forge/mywebsite.com/public;

    if ($host !~* ^www\.) {
        rewrite ^(.*)$ https://www.mywebsite.com$1;
    }

    # FORGE SSL (DO NOT REMOVE!)
    # ssl_certificate;
    # ssl_certificate_key;

    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA';
    ssl_prefer_server_ciphers on;
    ssl_dhparam /etc/nginx/dhparams.pem;

    add_header X-Frame-Options "SAMEORIGIN";
    add_header X-XSS-Protection "1; mode=block";
    add_header X-Content-Type-Options "nosniff";

    index index.html index.htm index.php;

    charset utf-8;

    # FORGE CONFIG (DOT NOT REMOVE!)
    include forge-conf/mywebsite.com/server/*;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    location = /favicon.ico { access_log off; log_not_found off; }
    location = /robots.txt { access_log off; log_not_found off; }

    access_log off;
    error_log /var/log/nginx/mywebsite.com-error.log error;

    error_page 404 /index.php;

    location ~ \.php$ {
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass unix:/var/run/php/php7.1-fpm.sock;
        fastcgi_index index.php;
        include fastcgi_params;
    }

    location ~ /\.(?!well-known).* {
        deny all;
    }
}

# FORGE CONFIG (DOT NOT REMOVE!)
include forge-conf/mywebsite.com/after/*;
Thing is, with this config I'm only getting redirect loops from the server.
Help please :D <3
After writing the prior general-purpose answer, I Googled "FORGE CONFIG (DOT NOT REMOVE!)", and this was the first result:
https://laracasts.com/discuss/channels/forge/forge-how-to-disable-nginx-default-redirection
inside nginx/forge-conf/be106.net/before/redirect.conf file there is this simple config:
…
server_name www.my-domain.net;
return 301 $scheme://my-domain.net$request_uri;
…
is there a simple way of removing this without altering the file itself(as it look like bad idea).
So, it appears that the redirect is being caused by the application stack you're using, which means we have found the most likely cause of the loop!
In turn, the appropriate way to configure your application to avoid said loop would be outside the scope of StackOverflow.
However, as a workaround:
consider whether you actually need all those forge-conf include directives at the load-balancer level; subsequently, you could fake the appropriate domain to be passed to the backend so that it would not cause a redirect (provided you remove your own redundant redirects):
- proxy_set_header Host $http_host;
+ proxy_set_header Host example.com;
note that the reason the forge-conf/example.com/before/redirect.conf directive takes precedence over your own configuration for .example.com is the order of the directives: you could potentially move the /before/* include to be after your own configuration, if such a move would otherwise make sense.
I don't think the nginx snippets you provided would cause a redirect loop by themselves.
First, you have to figure out whether it's an actual redirect — very often in these questions, the 301 Moved Permanently response gets cached in your browser, and subsequently you see a cached version, instead of a fresh one.
Subsequently, you'd have to figure out what is causing the redirect loop:
Try adding unique strings to each redirect directive, to see which one would be causing the loop.
if ($host !~* ^www\.) { return 301 $scheme://www.$host/levelX$request_uri; }
Ask yourself why you have so many redirect directives in the first place; there doesn't seem to be much of a valid reason to have redirect directives both at the front-end load balancer and at the backend.
If the above doesn't resolve the issue, then you know that the redirect loop is not coming from the files you've provided, and you have to dig deeper — it's possible for it to come from some other files, perhaps one of your include directives, or perhaps a default server of www.example.com is defined elsewhere, which redirects to example.com, or perhaps the redirect is done at the application layer.
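One way to hunt for such hidden redirects is to dump the full effective configuration, including every file pulled in by include directives, and search it for redirect rules (a quick sketch; the grep pattern is only an example and you may want to widen it):

# Print the complete merged nginx configuration and flag lines that redirect
sudo nginx -T | grep -nE "return 301|rewrite"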