AWS Amplify rewrite on SPA with Nginx proxy_pass - regex

We are setting up multiple Gatsby sites on AWS Amplify under a single domain. Using nginx proxy_pass, we are able to serve both legacy and future stacks:
www.mysite.co.uk        --- legacy
  /section-1            --- legacy
  /section-1/news       --- gatsby
  /section-2            --- legacy
  /section-3            --- gatsby
Generally we have this working, apart from one problem: when the user navigates to a sub-directory on one of the Gatsby sites and refreshes the page, the URL resolves back to the root of the section.
Nginx
location /section-1/news {
    rewrite ^/section-1/news/(.*) /$1 break;
    proxy_ssl_server_name on;
    proxy_pass https://prod.sdsdasdasdssse.amplifyapp.com/;
    proxy_redirect off;
}
AWS Amplify
[{
    "source": "</^[^.]+$|\.(?!(css|gif|ico|jpg|js|png|txt|svg|woff|ttf|map|json|webp)$)([^.]+$)/>",
    "status": "200",
    "target": "index.html",
    "condition": null
}]
If we add an additional rewrite, it all works fine, but we need something that captures all sub-directories; a possible generalization is sketched below this rule.
[{
    "source": "/news",
    "status": "200",
    "target": "/news/",
    "condition": null
}]

Related

How can I make apigateway forward root path to integrated http endpoint?

I created a REST API gateway in AWS and configured it to pass all requests through to an HTTP endpoint.
After deploying to a stage (dev), it gives me an invoke URL, like https://xxxx.execute-api.ap-southeast-2.amazonaws.com/dev.
It works fine if I invoke the URL with a sub-path appended, e.g. https://xxxx.execute-api.ap-southeast-2.amazonaws.com/dev/xxxxx; I can see it forward the request to the downstream HTTP endpoint. However, it doesn't forward any request if I invoke the base URL: https://xxxx.execute-api.ap-southeast-2.amazonaws.com/dev. How can I make it work with the base invoke URL, without any sub-path?
I tried to add an additional / path resource in API Gateway, but it doesn't allow me to add it.
The application must be able to receive requests at any path, including the root path /. An API Gateway resource with a path of /{proxy+} captures every path except the root path; making a request for the root path results in a 403 response from API Gateway with the message Missing Authentication Token.
To fix this omission, add an additional resource to the API with the path set to / and link that new resource to the same HTTP endpoint used in the existing /{proxy+} resource.
The updated OpenAPI document now looks like the following code example:
{
  "openapi": "3.0",
  "info": {
    "title": "All-capturing example",
    "version": "1.0"
  },
  "paths": {
    "/": {
      "x-amazon-apigateway-any-method": {
        "responses": {},
        "x-amazon-apigateway-integration": {
          "httpMethod": "POST",
          "type": "aws_proxy",
          "uri": ""
        }
      }
    },
    "/{proxy+}": {
      "x-amazon-apigateway-any-method": {
        "responses": {},
        "x-amazon-apigateway-integration": {
          "httpMethod": "POST",
          "type": "aws_proxy",
          "uri": ""
        }
      }
    }
  }
}
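With both resources deployed, a quick smoke test against the invoke URL from the question would be:
# both requests should now reach the downstream HTTP endpoint
curl https://xxxx.execute-api.ap-southeast-2.amazonaws.com/dev
curl https://xxxx.execute-api.ap-southeast-2.amazonaws.com/dev/some/sub/path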

Access Django admin from Firebase

I have a website which has a React frontend hosted on Firebase and a Django backend which is hosted on Google Cloud Run. I have a Firebase rewrite rule which points all my API calls to the Cloud Run instance. However, I am unable to use the Django admin panel from my custom domain which points to Firebase.
I have tried two different versions of rewrite rules:
"rewrites": [
{
"source": "/**",
"run": {
"serviceId": "serviceId",
"region": "europe-west1"
}
},
{
"source": "**",
"destination": "/index.html"
}
]
--- AND ---
"rewrites": [
{
"source": "/api/**",
"run": {
"serviceId": "serviceId",
"region": "europe-west1"
}
},
{
"source": "/admin/**",
"run": {
"serviceId": "serviceId",
"region": "europe-west1"
}
},
{
"source": "**",
"destination": "/index.html"
}
]
I am able to see the login page when I go to url.com/admin/; however, I am unable to get any further. It just refreshes the page with empty email/password fields and no error message. Just as an FYI, it is not to do with my username and password, as I have tested the admin panel and it works fine when accessed directly via the Cloud Run URL.
Any help will be much appreciated.
I didn't actually find an answer to why the admin login page just refreshed when logging in through the Firebase rewrite rule, but I thought of an alternative way to access the admin panel from my custom domain.
I added a custom domain to the Cloud Run instance so that it uses a subdomain of my site's domain, and I can access the admin panel at admin.customUrl.com rather than customUrl.com/admin/.
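For reference, the same domain mapping can be created from the CLI; this is a sketch using the serviceId and region from the rewrite rules above, with a placeholder domain:
# maps admin.customUrl.com to the Cloud Run service
gcloud beta run domain-mappings create \
  --service serviceId \
  --domain admin.customUrl.com \
  --region europe-west1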

Nginx internal dns resolve issue

I have an nginx container in AWS that acts as a reverse proxy for my website, e.g. https://example.com. I have backend services that automatically register in local DNS under aws.local (this is done by AWS ECS service discovery).
The problem I have is that nginx only resolves names to IPs at startup, so when a service container is rebooted and gets a new IP, nginx still tries the old IP and I get a "502 Bad Gateway" error.
Here is the config I am running:
worker_processes 1;

events { worker_connections 1024; }

http {
    sendfile on;
    include /etc/nginx/mime.types;

    log_format graylog2_json '{ "timestamp": "$time_iso8601", '
        '"remote_addr": "$remote_addr", '
        '"body_bytes_sent": $body_bytes_sent, '
        '"request_time": $request_time, '
        '"response_status": $status, '
        '"request": "$request", '
        '"request_method": "$request_method", '
        '"host": "$host",'
        '"upstream_cache_status": "$upstream_cache_status",'
        '"upstream_addr": "$upstream_addr",'
        '"http_x_forwarded_for": "$http_x_forwarded_for",'
        '"http_referrer": "$http_referer", '
        '"http_user_agent": "$http_user_agent" }';

    upstream service1 {
        server service1.aws.local:8070;
    }

    upstream service2 {
        server service2.aws.local:8080;
    }

    resolver 10.0.0.2 valid=10s;

    server {
        listen 443 http2 ssl;
        server_name example.com;

        location /main {
            proxy_pass http://service1;
        }

        location /auth {
            proxy_pass http://service2;
        }
    }
}
I found advice to change the nginx config to resolve names per request, but then my browser tries to open service2.aws.local:8070 and fails, since that is an AWS-internal DNS name. I should see https://example.com/auth in my browser.
server {
    set $main service1.aws.local:2000;
    set $auth service2.aws.local:8070;

    location /main {
        proxy_http_version 1.1;
        proxy_pass http://$main;
    }

    location /auth {
        proxy_http_version 1.1;
        proxy_pass http://$auth;
    }
}
Can you help me fix this?
Thanks!
TL;DR
resolver 169.254.169.253;
set $upstream "service1.aws.local";
proxy_pass http://$upstream:8070;
Just like with ECS, I experienced the same issue when using Docker Compose.
According to six8's comment on GitHub
nginx only resolves hostnames on startup. You can use variables with
proxy_pass to get it to use the resolver for runtime lookups.
See:
https://forum.nginx.org/read.php?2,215830,215832#msg-215832
https://www.ruby-forum.com/topic/4407628
It's quite annoying.
One of the links above provides an example
resolver 127.0.0.1;
set $backend "foo.example.com";
proxy_pass http://$backend;
The resolver part is necessary. And we can't refer to the defined upstreams here.
According to Ivan Frolov's answer on StackExchange, the resolver's address should be set to 169.254.169.253
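Putting those pieces together, here is a minimal sketch for the setup in the question (assumed: the service names and ports from the question, and that forwarding the original Host header keeps backend redirects pointing at example.com rather than leaking the aws.local names into the browser):
server {
    listen 443 ssl http2;
    server_name example.com;
    # ssl_certificate / ssl_certificate_key directives omitted

    # VPC DNS; 169.254.169.253 resolves from any subnet in the VPC
    resolver 169.254.169.253 valid=10s;

    location /main {
        # using a variable forces nginx to re-resolve at request time
        set $main_upstream service1.aws.local;
        proxy_http_version 1.1;
        # keep the public hostname so redirects don't expose aws.local
        proxy_set_header Host $host;
        proxy_pass http://$main_upstream:8070;
    }

    location /auth {
        set $auth_upstream service2.aws.local;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_pass http://$auth_upstream:8080;
    }
}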
What is the TTL for your Cloud Map service discovery records? If you do an nslookup from the NGINX container (assuming EC2 mode, and that you can exec into the container), does it return the new record? Without more information it's hard to say, but I'd venture that this is a TTL issue and not an NGINX/service discovery problem.
Lower the TTL to 1 second and see if that works.
AWS CloudMap API Reference DNS Record
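For reference, lowering the TTL can also be done with the AWS CLI; this is a sketch with a placeholder service ID:
# set the A record TTL of the Cloud Map service to 1 second
aws servicediscovery update-service \
  --id srv-xxxxxxxxxx \
  --service '{"DnsConfig":{"DnsRecords":[{"Type":"A","TTL":1}]}}'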
I found a solution to this issue.
Nginx's proxy_pass can't use /etc/hosts information, so I suggest using an HAProxy reverse proxy in ECS instead. I tried an nginx reverse proxy and failed, then succeeded with HAProxy, whose configuration is simpler than nginx's.
First, use Docker's links option and set environment variables (e.g. LINK_APP, LINK_PORT). Second, substitute these environment variables into haproxy.cfg.
I also recommend using dynamic port mapping with the ALB; it makes the setup more flexible.
taskdef.json :
# taskdef.json
{
  "executionRoleArn": "arn:aws:iam::<AWS_ACCOUNT_ID>:role/<APP_NAME>_ecsTaskExecutionRole",
  "containerDefinitions": [
    {
      "name": "<APP_NAME>-rp",
      "image": "gnokoheat/ecs-reverse-proxy:latest",
      "essential": true,
      "memoryReservation": <MEMORY_RESV>,
      "portMappings": [
        {
          "hostPort": 0,
          "containerPort": 80,
          "protocol": "tcp"
        }
      ],
      "links": [
        "<APP_NAME>"
      ],
      "environment": [
        {
          "name": "LINK_PORT",
          "value": "<SERVICE_PORT>"
        },
        {
          "name": "LINK_APP",
          "value": "<APP_NAME>"
        }
      ]
    },
    {
      "name": "<APP_NAME>",
      "image": "<IMAGE_NAME>",
      "essential": true,
      "memoryReservation": <MEMORY_RESV>,
      "portMappings": [
        {
          "protocol": "tcp",
          "containerPort": <SERVICE_PORT>
        }
      ],
      "environment": [
        {
          "name": "PORT",
          "value": "<SERVICE_PORT>"
        },
        {
          "name": "APP_NAME",
          "value": "<APP_NAME>"
        }
      ]
    }
  ],
  "requiresCompatibilities": [
    "EC2"
  ],
  "networkMode": "bridge",
  "family": "<APP_NAME>"
}
haproxy.cfg :
# haproxy.cfg
global
    daemon
    pidfile /var/run/haproxy.pid

defaults
    log global
    mode http
    retries 3
    timeout connect 5000
    timeout client 50000
    timeout server 50000

frontend http
    bind *:80
    http-request set-header X-Forwarded-Host %[req.hdr(Host)]
    compression algo gzip
    compression type text/css text/javascript text/plain application/json application/xml
    default_backend app

backend app
    server static "${LINK_APP}":"${LINK_PORT}"
Dockerfile(haproxy) :
FROM haproxy:1.7
USER root
COPY haproxy.cfg /usr/local/etc/haproxy/haproxy.cfg
See :
Github : https://github.com/gnokoheat/ecs-reverse-proxy
Docker image : gnokoheat/ecs-reverse-proxy:latest

Any way to configure what signature version a Minio server accepts?

I have a Minio server set up and everything appears to be running normally.
For my CLI, I have this in my config.json:
"myalias": {
"url": "https://myurl",
"accessKey": "myaccesskey",
"secretKey": "mysecretkey",
"api": "S3v4",
"lookup": "auto",
"Region": "us-east-1"
}
But when I try to upload a file, I get this:
# mc cp test.txt myalias/stuff/
0 B / 19 B [ ] 0.00%
mc: <ERROR> Failed to copy `test.txt`. The request signature we calculated does not match the signature you provided. Check your key and signing method.
If I change my api in config.json to this:
"api": "S3v2"
It works:
# mc cp test.txt myalias/stuff/
test.txt: 19 B / 19 B [==============================] 100.00% 193 B/s 0s
My question is: can I configure Minio to use version 4 signature verification instead of version 2? Isn't Minio supposed to use version 4 by default?
Turns out it was a problem with the NGINX setup our IT guys had put in place. The problem and solution are outlined in these links:
https://github.com/minio/minio/issues/5298
https://docs.minio.io/docs/setup-nginx-proxy-with-minio
tl;dr:
After hours of research, I realized that I had missed the Host directive in both reverse proxy configurations I had set up. Signature v4 includes the Host header among the signed headers, so a proxy that rewrites it breaks verification, while v2 does not sign it, which is why S3v2 kept working.
For completeness, these are the directives I missed:
Nginx
location / {
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $remote_addr;
    proxy_pass http://minio;
}
Caddyfile
proxy / localhost:9898 {
    transparent
}
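With either proxy fixed, the original v4 upload from the question should go through again:
# with "api": "S3v4" restored in config.json
mc cp test.txt myalias/stuff/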
Can you post the version of your minio and mc? Minio should support both s3v4 and s3v2. Also is there anything different about your access key and secret key?

Enabling CORS on Google App Engine for a Django Application

I have been trying to enable CORS headers on Google App Engine, but none of the methods I found on the internet worked for me.
My application is built with Python/Django, and I want my frontend application (which is hosted separately) to be able to make API calls to my backend on Google App Engine.
The January 2017 release notes say that
We are changing the behavior of the Extensible Service Proxy (ESP) to deny cross-origin resource sharing (CORS) requests by default
It can be seen here.
The solution they give for enabling CORS is to add the following snippet to the service's OpenAPI configuration:
"host": "echo-api.endpoints.YOUR_PROJECT_ID.cloud.goog",
"x-google-endpoints": [
{
"name": "echo-api.endpoints.YOUR_PROJECT_ID.cloud.goog",
"allowCors": "true"
}
],
...
So I followed this example and created two files in my code base
openapi.yml :
swagger: "2.0"
info:
description: "Google Cloud Endpoints APIs"
title: "APIs"
version: "1.0.0"
host: "echo-api.endpoints.<PROJECT-ID>.cloud.goog"
x-google-endpoints:
- name: "echo-api.endpoints.<PROJECT-ID>.cloud.goog"
allowCors: "true"
paths:
"/api/v1/sign-up":
post:
description: "Sends an email for verfication"
operationId: "signup"
produces:
- "application/json"
responses:
200:
description: "OK"
parameters:
- description: "Email address of the user"
in: body
name: email
required: true
schema:
type: string
- description: "password1"
in: body
name: password1
required: true
schema:
type: string
- description: "password2"
in: body
name: password2
required: true
schema:
type: string
openapi-appengine.yml:
swagger: "2.0"
info:
description: "Google Cloud Endpoints API fo localinsights backend server"
title: "Localinsights APIs"
version: "1.0.0"
host: "<PROJECT-ID>.appspot.com"
Then I ran this command:
gcloud service-management deploy openapi.yml
Then I edited my app.yml file to look like this (the addition was endpoints_api_service; before adding it, the app deployed without any errors):
runtime: python
env: flex
entrypoint: gunicorn -b :$PORT myapp.wsgi

beta_settings:
  cloud_sql_instances: <cloud instance>

runtime_config:
  python_version: 3

automatic_scaling:
  min_num_instances: 1
  max_num_instances: 1

resources:
  cpu: 1
  memory_gb: 0.90
  disk_size_gb: 10

env_variables:
  DJANGO_SETTINGS_MODULE: myapp.settings.staging
  DATABASE_URL: <dj-database-url>

endpoints_api_service:
  name: "<PROJECT-ID>.appspot.com"
  config_id: "<CONFIG-ID>"
Then I just deployed the application with
gcloud app deploy
Now, the app deployed successfully, but it behaves strangely: all the requests that are supposed to return a 200 response still throw a CORS error, while the ones that return a 400 status work.
For example, the sign-up API expects the fields email, password1, and password2, where password1 should be the same as password2. When I send correct parameters, I get an HTTP 502 saying:
No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin {origin-url} is therefore not allowed access. The response had HTTP status code 502.
But when I send a password1 that does not match password2, I get an HTTP 400 response, which I am sure comes from my code, because the response body is the dictionary my code returns when password1 and password2 do not match. Also, in that case the headers do contain Access-Control-Allow-Origin: *, but in the former case they do not.
I also checked my nginx error logs, which show:
*27462 upstream prematurely closed connection while reading response header
What am I doing wrong here?
Is this the right way to enable CORS in GAE?
After banging my head for several days, I was able to figure out the real problem: my database server was denying connections from the webapp server.
In the case of an HTTP 200 response, the webapp has to make a database call, so it was trying to connect to the database server. This connection took too long, and as soon as it exceeded NGINX's timeout, NGINX sent a response to the web browser with status code 502.
Since the Access-Control-Allow-Origin header was set by the webapp, NGINX did not set that header in its own response, so the browser interpreted it as a CORS denial.
As soon as I whitelisted my webapp instance's IP address on the database server, things started running smoothly.
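For a Cloud SQL database, that whitelisting step might look like this (a sketch; the instance name and IP are placeholders):
# authorize the webapp's external IP on the Cloud SQL instance
gcloud sql instances patch my-sql-instance \
  --authorized-networks=203.0.113.10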
Summary:
There is no need for an openapi.yml file to enable CORS for a Django application on the GAE flexible environment
Don't forget to check the NGINX logs :p
Update:
Just wanted to update my answer with a way to avoid having to add your instance's IP to the whitelisted IPs of the SQL instance.
Configure the DATABASES like this:
DATABASES = {
    'default': {
        'HOST': <your-cloudsql-connection-string>,  # This is the tricky part
        'ENGINE': <db-backend>,
        'NAME': <db-name>,
        'USER': <user>,
        'PASSWORD': <password>,
    }
}
Note the HOST key in DATABASES. GAE has a way through which you won't have to whitelist your instance's IP, but for that to work the host must be the Cloud SQL connection string, NOT the IP of the SQL instance.
If you are not sure what your connection string is, go to the Google Cloud Platform dashboard and select the SQL tab under the Storage section. You should see a table with a column Instance connection name; the value in that column is your connection string.
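The connection string is also available from the CLI (a sketch; the instance name is a placeholder):
# prints e.g. my-project:europe-west1:my-sql-instance
gcloud sql instances describe my-sql-instance \
  --format='value(connectionName)'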
Nginx, as your reverse proxy and therefore the gateway to your system, should be what manages CORS for client browser requests, since it is the first point of contact from outside. It should not be handled by any of the backend servers (not your database, not anything else).
Here is my default config for enabling CORS in nginx for Ajax calls to a REST service of my own (Glassfish backend). Feel free to check it out and use it; I hope it serves you well.
server {
    listen 80; ## listen for ipv4; this line is default and implied
    server_name codevault;

    # Glassfish
    location /GameFactoryService/ {
        index index.html;
        add_header Access-Control-Allow-Origin $http_origin;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-NginX-Proxy true;
        proxy_ssl_session_reuse off;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://127.0.0.1:18000/GameFactoryService/;
    }

    # static content
    location / {
        root /usr/share/nginx_static_content;
    }

    error_page 500 501 502 503 504 505 506 507 508 509 510 511 /50x.html;

    # error
    location = /50x.html {
        add_header Access-Control-Allow-Origin $http_origin;
        internal;
    }
}
}