Any way to configure what signature version a Minio server accepts?

I have a Minio server set up and everything appears to be running normally.
For my CLI, I have this in my config.json:
"myalias": {
"url": "https://myurl",
"accessKey": "myaccesskey",
"secretKey": "mysecretkey",
"api": "S3v4",
"lookup": "auto",
"Region": "us-east-1"
}
But when I try to upload a file, I get this:
# mc cp test.txt myalias/stuff/
0 B / 19 B [ ] 0.00%
mc: <ERROR> Failed to copy `test.txt`. The request signature we
calculated does not match the signature you provided. Check your key and
signing method.
If I change my api in config.json to this:
"api": "S3v2"
It works:
# mc cp test.txt myalias/stuff/
test.txt: 19 B / 19 B [==============================] 100.00% 193 B/s 0s
My question is: can I configure Minio to use version 4 signature verification instead of version 2? Isn't Minio supposed to use version 4 by default?

Turns out it was a problem with NGINX that our IT guys had set up. The problem and solution are outlined in these links:
https://github.com/minio/minio/issues/5298
https://docs.minio.io/docs/setup-nginx-proxy-with-minio
tl;dr:
After hours of research, I realized that I had missed the Host directive in both reverse proxy configurations I had set up. Signature v4 signs the Host header as part of the request, so when the proxy rewrites it, the signature the server computes no longer matches the one the client sent.
For completeness, these are the directives I had missed:
Nginx
location / {
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $remote_addr;
    proxy_pass http://minio;
}
Caddyfile
proxy / localhost:9898 {
    transparent
}

Can you post the versions of your Minio server and mc client? Minio should support both S3v4 and S3v2. Also, is there anything unusual about your access key or secret key?
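For reference, both binaries can print their versions directly (assuming standard installs on your PATH):
# mc --version
# minio --version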

Related

AWS Amplify rewrite on SPA with Nginx proxy_pass

We are setting up multiple Gatsby sites on AWS Amplify under a single domain. Using nginx proxy_pass, we are able to serve both legacy and future stacks.
www.mysite.co.uk --- legacy
-- /section-1 --- legacy
-- /section-1/news --- gatsby
-- /section-2 --- legacy
-- /section-3 --- gatsby
Generally we have this working, apart from one problem: when the user navigates to a sub-directory on one of the Gatsby sites and refreshes the page, the URL resolves back to the root of the section.
Nginx
location /section-1/news {
    rewrite ^\/section-1/news\/(.*) /$1 break;
    proxy_ssl_server_name on;
    proxy_pass https://prod.sdsdasdasdssse.amplifyapp.com/;
    proxy_redirect off;
}
AWS Amplify
[{
    "source": "</^[^.]+$|\.(?!(css|gif|ico|jpg|js|png|txt|svg|woff|ttf|map|json|webp)$)([^.]+$)/>",
    "status": "200",
    "target": "index.html",
    "condition": null
}]
If we add an additional rewrite, it all works fine, but we need a rule that captures all sub-directories (see the sketch after the example below).
[{
    "source": "/news",
    "status": "200",
    "target": "/news/",
    "condition": null
}]
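One possible generalization is to replace the fixed path with Amplify's wildcard token, which the redirect syntax allows in both source and target. This is an untested sketch, and whether it fully covers the refresh case would need verifying:
[{
    "source": "/news/<*>",
    "status": "200",
    "target": "/news/<*>/",
    "condition": null
}]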

Nginx internal DNS resolution issue

I have an nginx container in AWS that acts as a reverse proxy for my website, e.g. https://example.com. I have backend services that automatically register in local DNS under aws.local (this is done by AWS ECS Service Discovery).
The problem I have is that nginx only resolves names to IPs at startup, so when a service container is rebooted and gets a new IP, nginx still tries the old IP and I get a "502 Bad Gateway" error.
Here is the config I am running:
worker_processes 1;
events { worker_connections 1024; }
http {
    sendfile on;
    include /etc/nginx/mime.types;
    log_format graylog2_json '{ "timestamp": "$time_iso8601", '
        '"remote_addr": "$remote_addr", '
        '"body_bytes_sent": $body_bytes_sent, '
        '"request_time": $request_time, '
        '"response_status": $status, '
        '"request": "$request", '
        '"request_method": "$request_method", '
        '"host": "$host",'
        '"upstream_cache_status": "$upstream_cache_status",'
        '"upstream_addr": "$upstream_addr",'
        '"http_x_forwarded_for": "$http_x_forwarded_for",'
        '"http_referrer": "$http_referer", '
        '"http_user_agent": "$http_user_agent" }';
    upstream service1 {
        server service1.aws.local:8070;
    }
    upstream service2 {
        server service2.aws.local:8080;
    }
    resolver 10.0.0.2 valid=10s;
    server {
        listen 443 http2 ssl;
        server_name example.com;
        location /main {
            proxy_pass http://service1;
        }
        location /auth {
            proxy_pass http://service2;
        }
    }
}
I found advice to change the nginx config to resolve names per request, but then my browser tries to open service2.aws.local:8070 and fails, since that is an AWS-internal DNS name; I should be seeing https://example.com/auth in my browser.
server {
    set $main service1.aws.local:2000;
    set $auth service2.aws.local:8070;
    location /main {
        proxy_http_version 1.1;
        proxy_pass http://$main;
    }
    location /auth {
        proxy_http_version 1.1;
        proxy_pass http://$auth;
    }
}
Can you help me fix it? Thanks!
TL;DR
resolver 169.254.169.253;
set $upstream "service1.aws.local";
proxy_pass http://$upstream:8070;
Just like with ECS, I experienced the same issue when using Docker Compose.
According to six8's comment on GitHub
nginx only resolves hostnames on startup. You can use variables with
proxy_pass to get it to use the resolver for runtime lookups.
See:
https://forum.nginx.org/read.php?2,215830,215832#msg-215832
https://www.ruby-forum.com/topic/4407628
It's quite annoying.
One of the links above provides an example
resolver 127.0.0.1;
set $backend "foo.example.com";
proxy_pass http://$backend;
The resolver directive is necessary, and note that you can't refer to a named upstream block here.
According to Ivan Frolov's answer on StackExchange, the resolver's address should be set to 169.254.169.253 (the Amazon-provided VPC DNS address).
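Putting those pieces together for the setup in the question, a minimal sketch might look like this (hostname, port, and route are taken from the question; untested):
http {
    # Amazon-provided DNS; re-resolve every 10s instead of caching forever
    resolver 169.254.169.253 valid=10s;

    server {
        listen 443 http2 ssl;
        server_name example.com;

        location /main {
            # a variable in proxy_pass forces a runtime DNS lookup per request
            set $main service1.aws.local;
            proxy_http_version 1.1;
            proxy_set_header Host $host;  # keep the public hostname, not the internal one
            proxy_pass http://$main:8070;
        }
    }
}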
What is the TTL for your CloudMap Service Discovery records? If you do an NS lookup from the NGINX container (assuming EC2 mode and you can exec into the container), does it return the new record? Without more information it's hard to say, but I'd venture that this is a TTL issue and not an NGINX/Service Discovery problem.
Lower the TTL to 1 second and see if that works.
AWS CloudMap API Reference DNS Record
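If the TTL does turn out to be the problem, the UpdateService API can lower it; a sketch with a hypothetical service ID:
# srv-xxxxxxxxxxxx is a placeholder; find yours with: aws servicediscovery list-services
aws servicediscovery update-service \
    --id srv-xxxxxxxxxxxx \
    --service '{"DnsConfig":{"DnsRecords":[{"Type":"A","TTL":1}]}}'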
I found a solution that works well for this issue.
Nginx's proxy_pass can't use /etc/hosts entries.
I suggest using an HAProxy reverse proxy in ECS instead. I tried an nginx reverse proxy and failed, then succeeded with HAProxy, whose configuration is simpler than nginx's.
First, use Docker's "links" option and set environment variables (e.g. LINK_APP, LINK_PORT).
Second, substitute these environment variables into haproxy.cfg.
Also, I recommend using "dynamic port mapping" with the ALB; it makes things more flexible.
taskdef.json :
# taskdef.json
{
    "executionRoleArn": "arn:aws:iam::<AWS_ACCOUNT_ID>:role/<APP_NAME>_ecsTaskExecutionRole",
    "containerDefinitions": [
        {
            "name": "<APP_NAME>-rp",
            "image": "gnokoheat/ecs-reverse-proxy:latest",
            "essential": true,
            "memoryReservation": <MEMORY_RESV>,
            "portMappings": [
                {
                    "hostPort": 0,
                    "containerPort": 80,
                    "protocol": "tcp"
                }
            ],
            "links": [
                "<APP_NAME>"
            ],
            "environment": [
                {
                    "name": "LINK_PORT",
                    "value": "<SERVICE_PORT>"
                },
                {
                    "name": "LINK_APP",
                    "value": "<APP_NAME>"
                }
            ]
        },
        {
            "name": "<APP_NAME>",
            "image": "<IMAGE_NAME>",
            "essential": true,
            "memoryReservation": <MEMORY_RESV>,
            "portMappings": [
                {
                    "protocol": "tcp",
                    "containerPort": <SERVICE_PORT>
                }
            ],
            "environment": [
                {
                    "name": "PORT",
                    "value": "<SERVICE_PORT>"
                },
                {
                    "name": "APP_NAME",
                    "value": "<APP_NAME>"
                }
            ]
        }
    ],
    "requiresCompatibilities": [
        "EC2"
    ],
    "networkMode": "bridge",
    "family": "<APP_NAME>"
}
haproxy.cfg :
# haproxy.cfg
global
    daemon
    pidfile /var/run/haproxy.pid

defaults
    log     global
    mode    http
    retries 3
    timeout connect 5000
    timeout client  50000
    timeout server  50000

frontend http
    bind *:80
    http-request set-header X-Forwarded-Host %[req.hdr(Host)]
    compression algo gzip
    compression type text/css text/javascript text/plain application/json application/xml
    default_backend app

backend app
    server static "${LINK_APP}":"${LINK_PORT}"
Dockerfile(haproxy) :
FROM haproxy:1.7
USER root
COPY haproxy.cfg /usr/local/etc/haproxy/haproxy.cfg
See:
Github : https://github.com/gnokoheat/ecs-reverse-proxy
Docker image : gnokoheat/ecs-reverse-proxy:latest

detect www. using sed

I am new to sed and somewhat confused.
Here is what I have in the nginx folder of my project:
files:
  "/tmp/45_nginx_https_rw.sh":
    owner: root
    group: root
    mode: "000644"
    content: |
      #! /bin/bash
      CONFIGURED=`grep -c "return 301 https" /etc/nginx/conf.d/00_elastic_beanstalk_proxy.conf`
      if [ $CONFIGURED = 0 ]
      then
        sed -i '/listen 8080;/a \ if ($http_x_forwarded_proto = "http") { return 301 https://$host$request_uri; }\n' /etc/nginx/conf.d/00_elastic_beanstalk_proxy.conf
        logger -t nginx_rw "https rewrite rules added"
        exit 0
      else
        logger -t nginx_rw "https rewrite rules already set"
        exit 0
      fi
The above code works like a charm; it redirects all HTTP requests to HTTPS.
However, I also need to check whether the URL has a www prefix and redirect to the non-www host, so that, for example, www.test.com is redirected to test.com.
How can I achieve this?
The AWS-recommended solution was to completely rewrite /etc/nginx/sites-available/elasticbeanstalk-nginx-docker-proxy.conf.
You can do this by adding this file in .ebextensions: https://github.com/awsdocs/elastic-beanstalk-samples/blob/master/configuration-files/aws-provided/security-configuration/https-redirect/docker-sc/https-redirect-docker-sc.config
I made one extra tweak for the www redirect:
location / {
    set $redirect 0;
    if ($http_x_forwarded_proto != "https") {
        set $redirect 1;
    }
    if ($host ~ ^www\.(?<domain>.+)$) {
        set $redirect 1;
    }
    if ($http_user_agent ~* "ELB-HealthChecker") {
        set $redirect 0;
    }
    if ($redirect = 1) {
        return 301 https://ADDYOURDOMAINREDIRECTHERE$request_uri;
    }
}
*One more note: your EC2 instance may already have a bunch of config files from earlier attempts at a solution. I suggest deploying your app with no config in .ebextensions and then using Rebuild Environment from the Elastic Beanstalk console. This should give you a blank slate. Then simply add the file above to .ebextensions and redeploy.
*Another note: you can SSH into your EC2 instance and verify that the file looks correct.
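Alternatively, if you want to stay with the original sed approach from the question, a second append along the same lines should work; an untested sketch against the same config file (the grep idempotency check would need to cover this rule too, and the double backslash keeps the literal \. through sed's append text):
sed -i '/listen 8080;/a \ if ($host ~ ^www\\.(.+)$) { return 301 https://$1$request_uri; }\n' /etc/nginx/conf.d/00_elastic_beanstalk_proxy.conf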

Enabling CORS on Google App Engine for a Django Application

I have been trying to enable CORS headers on Google App Engine, but none of the methods I found on the internet worked for me.
My application is built with Python/Django, and I want my frontend application (which is hosted separately) to be able to make API calls to my backend platform on Google App Engine.
The January 2017 release notes say:
We are changing the behavior of the Extensible Service Proxy (ESP) to deny cross-origin resource sharing (CORS) requests by default
It can be seen here.
And the solution they give to enable CORS is to add the following snippet to the service's OpenAPI configuration:
"host": "echo-api.endpoints.YOUR_PROJECT_ID.cloud.goog",
"x-google-endpoints": [
{
"name": "echo-api.endpoints.YOUR_PROJECT_ID.cloud.goog",
"allowCors": "true"
}
],
...
So I followed this example and created two files in my code base
openapi.yml :
swagger: "2.0"
info:
description: "Google Cloud Endpoints APIs"
title: "APIs"
version: "1.0.0"
host: "echo-api.endpoints.<PROJECT-ID>.cloud.goog"
x-google-endpoints:
- name: "echo-api.endpoints.<PROJECT-ID>.cloud.goog"
allowCors: "true"
paths:
"/api/v1/sign-up":
post:
description: "Sends an email for verfication"
operationId: "signup"
produces:
- "application/json"
responses:
200:
description: "OK"
parameters:
- description: "Email address of the user"
in: body
name: email
required: true
schema:
type: string
- description: "password1"
in: body
name: password1
required: true
schema:
type: string
- description: "password2"
in: body
name: password2
required: true
schema:
type: string
openapi-appengine.yml:
swagger: "2.0"
info:
description: "Google Cloud Endpoints API fo localinsights backend server"
title: "Localinsights APIs"
version: "1.0.0"
host: "<PROJECT-ID>.appspot.com"
Then I ran this command:
gcloud service-management deploy openapi.yml
Then I edited my app.yaml file to look like this (the addition was endpoints_api_service; before adding it, the app deployed without any errors):
runtime: python
env: flex
entrypoint: gunicorn -b :$PORT myapp.wsgi

beta_settings:
  cloud_sql_instances: <cloud instance>

runtime_config:
  python_version: 3

automatic_scaling:
  min_num_instances: 1
  max_num_instances: 1

resources:
  cpu: 1
  memory_gb: 0.90
  disk_size_gb: 10

env_variables:
  DJANGO_SETTINGS_MODULE: myapp.settings.staging
  DATABASE_URL: <dj-database-url>

endpoints_api_service:
  name: "<PROJECT-ID>.appspot.com"
  config_id: "<CONFIG-ID>"
Then I just deployed the application with
gcloud app deploy
Now, the app deployed successfully, but it behaves strangely: all the requests that are supposed to return a 200 response still throw a CORS error, while the ones that return a 400 status work fine.
For example, the sign-up API expects the fields email, password1, and password2, where password1 should match password2. When I send correct parameters, I get an HTTP 502 saying
No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin {origin-url} is therefore not allowed access. The response had HTTP status code 502
But when I send a password1 that does not match password2, I get an HTTP 400 response, which I am sure comes from my code, because the response body is a dictionary written in my code for the case where password1 and password2 do not match. Also, in this case the headers include Access-Control-Allow-Origin: *, which was not true in the former case.
I also checked my nginx error logs, which say
*27462 upstream prematurely closed connection while reading response header
What am I doing wrong here?
Is this the right way to enable CORS on GAE?
After banging my head for several days, I was able to figure out the real problem: my database server was denying connections from the webapp server.
Since in the case of an HTTP 200 response the webapp makes a database call, it was trying to connect to the database server. This connection took too long, and as soon as it exceeded nginx's timeout, nginx sent a response to the browser with status code 502.
Since the Access-Control-Allow-Origin header was set by the webapp, nginx did not set that header in its own response, so the browser interpreted its absence as a CORS denial.
As soon as I whitelisted my webapp instance's IP address on the database server, things started running smoothly.
Summary:
There is no need for an openapi.yml file to enable CORS for a Django application on the GAE flexible environment
Do not forget to check the nginx logs :p
Update:
I just wanted to update my answer with a way to avoid having to add your instance's IP to the whitelisted IP(s) of the SQL instance.
Configure the DATABASES setting like this:
DATABASES = {
    'default': {
        'HOST': <your-cloudsql-connection-string>,  # This is the tricky part
        'ENGINE': <db-backend>,
        'NAME': <db-name>,
        'USER': <user>,
        'PASSWORD': <password>,
    }
}
Note the HOST key in the database settings. GAE has a way through which you won't have to whitelist your instance's IP, but for that to work the host must be the Cloud SQL connection string, NOT the IP of the SQL instance.
If you are not sure what your connection string is, go to the Google Cloud Platform dashboard and select the SQL tab under the Storage section. You should see a table with a column Instance connection name; the value in this column is your connection string.
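For illustration, a filled-in sketch of the settings above; every value here is a hypothetical placeholder, and the HOST shown uses the /cloudsql/<connection-name> unix-socket form exposed by the Cloud SQL proxy on the GAE flexible environment:
# All values are hypothetical placeholders.
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': 'mydb',
        'USER': 'myuser',
        'PASSWORD': 'mypassword',
        # connection string has the form PROJECT:REGION:INSTANCE,
        # exposed as a unix socket under /cloudsql/ by the Cloud SQL proxy
        'HOST': '/cloudsql/my-project:us-central1:my-instance',
    }
}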
Nginx, as your reverse proxy and therefore the gateway to your system, should be what manages CORS for client browser requests, since it is the first point of contact from outside; it should not be any of the backend servers (not your database, not anything else).
Here is my default config for enabling CORS in nginx for Ajax calls to a REST service of my own (a Glassfish backend). Feel free to check and use it; I hope it serves you.
server {
    listen 80; ## listen for ipv4; this line is default and implied
    server_name codevault;

    # Glassfish
    location /GameFactoryService/ {
        index index.html;
        add_header Access-Control-Allow-Origin $http_origin;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-NginX-Proxy true;
        proxy_ssl_session_reuse off;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://127.0.0.1:18000/GameFactoryService/;
    }

    # static content
    location / {
        root /usr/share/nginx_static_content;
    }

    error_page 500 501 502 503 504 505 506 507 508 509 510 511 /50x.html;

    # error
    location = /50x.html {
        add_header Access-Control-Allow-Origin $http_origin;
        internal;
    }
}

AWS CloudFront: Font from origin has been blocked from loading by Cross-Origin Resource Sharing policy

I'm receiving the following error on a couple of Chrome browsers, but not all. I'm not entirely sure what the issue is at this point.
Font from origin https://ABCDEFG.cloudfront.net has been blocked from loading by Cross-Origin Resource Sharing policy: No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin https://sub.domain.example is therefore not allowed access.
I have the following CORS Configuration on S3
<CORSConfiguration>
    <CORSRule>
        <AllowedOrigin>*</AllowedOrigin>
        <AllowedHeader>*</AllowedHeader>
        <AllowedMethod>GET</AllowedMethod>
    </CORSRule>
</CORSConfiguration>
The request
Remote Address:1.2.3.4:443
Request URL:https://abcdefg.cloudfront.net/folder/path/icons-f10eba064933db447695cf85b06f7df3.woff
Request Method:GET
Status Code:200 OK
Request Headers
Accept:*/*
Accept-Encoding:gzip,deflate
Accept-Language:en-US,en;q=0.8
Cache-Control:no-cache
Connection:keep-alive
Host:abcdefg.cloudfront.net
Origin:https://sub.domain.example
Pragma:no-cache
Referer:https://abcdefg.cloudfront.net/folder/path/icons-e283e9c896b17f5fb5717f7c9f6b05eb.css
User-Agent:Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/37.0.2062.94 Safari/537.36
All other requests from CloudFront/S3 work properly, including JS files.
Add this rule to your .htaccess
Header add Access-Control-Allow-Origin "*"
Even better, as suggested by @david thomas, you can use a specific domain value, e.g.
Header add Access-Control-Allow-Origin "your-domain.example"
Since around Sep/Oct 2014, Chrome subjects fonts to the same CORS checks that Firefox has long performed: https://code.google.com/p/chromium/issues/detail?id=286681. There is a discussion of this at https://groups.google.com/a/chromium.org/forum/?fromgroups=#!topic/blink-dev/TT9D5-Zfnzw
Given that the browser may do a preflight check for fonts, your S3 policy needs to allow the CORS request headers as well. You can check your page in, say, Safari (which at present doesn't do CORS checking for fonts) and Firefox (which does) to confirm this is the problem described.
See the Stack Overflow answer on Amazon S3 CORS (Cross-Origin Resource Sharing) and Firefox cross-domain font loading for the Amazon S3 CORS details.
NB: because this used to apply only to Firefox, it may help to search for Firefox rather than Chrome.
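To observe this from the command line, you can send a preflight-style request yourself with curl (URL and origin taken from the question; -D - prints the response headers):
curl -s -o /dev/null -D - -X OPTIONS \
    -H "Origin: https://sub.domain.example" \
    -H "Access-Control-Request-Method: GET" \
    https://abcdefg.cloudfront.net/folder/path/icons-f10eba064933db447695cf85b06f7df3.woff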
I was able to solve the problem by simply adding <AllowedMethod>HEAD</AllowedMethod> to the CORS policy of the S3 Bucket.
Example:
<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
    <CORSRule>
        <AllowedOrigin>*</AllowedOrigin>
        <AllowedMethod>GET</AllowedMethod>
        <AllowedMethod>HEAD</AllowedMethod>
        <MaxAgeSeconds>3000</MaxAgeSeconds>
        <AllowedHeader>Authorization</AllowedHeader>
    </CORSRule>
</CORSConfiguration>
Nginx:
location ~* \.(eot|ttf|woff)$ {
    add_header Access-Control-Allow-Origin '*';
}
AWS S3:
Select your bucket
Click Properties at the top right
Permissions => Edit CORS Configuration => Save
http://schock.net/articles/2013/07/03/hosting-web-fonts-on-a-cdn-youre-going-to-need-some-cors/
On June 26, 2014, AWS released proper Vary: Origin behavior on CloudFront, so now you just:
Set a CORS configuration for your S3 bucket:
<AllowedOrigin>*</AllowedOrigin>
In CloudFront -> Distribution -> Behaviors for this origin, use the Forward Headers: Whitelist option and whitelist the 'Origin' header.
Wait for ~20 minutes while CloudFront propagates the new rule
Now your CloudFront distribution should cache different responses (with proper CORS headers) for different client Origin headers.
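To verify, you can check that the CORS headers come back for a given Origin (URL taken from the question; -D - prints the response headers):
curl -s -o /dev/null -D - \
    -H "Origin: https://sub.domain.example" \
    https://abcdefg.cloudfront.net/folder/path/icons-f10eba064933db447695cf85b06f7df3.woff | grep -iE 'access-control|vary'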
The only thing that worked for me (probably because I had inconsistencies with www usage):
Paste this into your .htaccess file:
<IfModule mod_headers.c>
    <FilesMatch "\.(eot|font.css|otf|ttc|ttf|woff)$">
        Header set Access-Control-Allow-Origin "*"
    </FilesMatch>
</IfModule>

<IfModule mod_mime.c>
    # Web fonts
    AddType application/font-woff woff
    AddType application/vnd.ms-fontobject eot

    # Browsers usually ignore the font MIME types and sniff the content,
    # however, Chrome shows a warning if other MIME types are used for the
    # following fonts.
    AddType application/x-font-ttf ttc ttf
    AddType font/opentype otf

    # Make SVGZ fonts work on iPad:
    # https://twitter.com/FontSquirrel/status/14855840545
    AddType image/svg+xml svg svgz
    AddEncoding gzip svgz
</IfModule>

# rewrite www.example.com → example.com
<IfModule mod_rewrite.c>
    RewriteCond %{HTTPS} !=on
    RewriteCond %{HTTP_HOST} ^www\.(.+)$ [NC]
    RewriteRule ^ http://%1%{REQUEST_URI} [R=301,L]
</IfModule>
http://ce3wiki.theturninggate.net/doku.php?id=cross-domain_issues_broken_web_fonts
I had this same problem and this link provided the solution for me:
http://www.holovaty.com/writing/cors-ie-cloudfront/
The short version of it is:
Edit the S3 CORS config (my code sample didn't display properly)
Note: this is already done in the original question
Note: the code provided is not very secure; more info on the linked page
Go to the "Behaviors" tab of your distribution and click to edit
Change "Forward Headers" from “None (Improves Caching)” to “Whitelist.”
Add “Origin” to the "Whitelist Headers" list
Save the changes
Your CloudFront distribution will update, which takes about 10 minutes. After that all should be well; you can verify by checking that the CORS-related error messages are gone from the browser.
For those using Microsoft products with a web.config file:
Merge this with your web.config; note that the element names are case-sensitive.
To allow any domain, replace value="domain" with value="*":
<?xml version="1.0" encoding="utf-8" ?>
<configuration>
    <system.webServer>
        <httpProtocol>
            <customHeaders>
                <add name="Access-Control-Allow-Origin" value="domain" />
            </customHeaders>
        </httpProtocol>
    </system.webServer>
</configuration>
If you don't have permission to edit web.config, then add this line in your server-side code.
Response.AppendHeader("Access-Control-Allow-Origin", "domain");
For AWS S3, setting the Cross-origin resource sharing (CORS) to the following worked for me:
[
    {
        "AllowedHeaders": [
            "Authorization"
        ],
        "AllowedMethods": [
            "GET",
            "HEAD"
        ],
        "AllowedOrigins": [
            "*"
        ],
        "ExposeHeaders": []
    }
]
There is a nice writeup here.
Configuring this in nginx/apache is a mistake: if you are using a hosting company, you can't configure the edge, and if you are using Docker, the app should be self-contained.
Note that some examples use connectHandlers, but this only sets headers on the document. Using rawConnectHandlers applies to all assets served (fonts/css/etc.).
// HSTS only the document - don't function over http.
// Make sure you want this as it won't go away for 30 days.
WebApp.connectHandlers.use(function(req, res, next) {
    res.setHeader('Strict-Transport-Security', 'max-age=2592000; includeSubDomains'); // 2592000 s = 30 days
    next();
});

// CORS: applied to all assets served (fonts/etc.)
WebApp.rawConnectHandlers.use(function(req, res, next) {
    res.setHeader('Access-Control-Allow-Origin', '*');
    return next();
});
This would be a good time to look at browser policy like framing, etc.
Late to the party, but I just ran into this problem and solved it with the following settings in my AWS bucket config (Permissions tab). The required format is no longer XML but JSON:
[
    {
        "AllowedHeaders": [
            "Content-*"
        ],
        "AllowedMethods": [
            "GET",
            "HEAD"
        ],
        "AllowedOrigins": [
            "https://www.yourdomain.example",
            "https://yourdomain.example"
        ],
        "ExposeHeaders": []
    }
]
If you use Node.js as your server, just add an Access-Control-Allow-Origin header to every response, like this:
app.use((req, res, next) => {
    res.header('Access-Control-Allow-Origin', '*');
    next();
});
The response needs to carry the header for the requesting origin.
If you want to allow all the fonts in a folder for a specific domain, you can use this:
<location path="assets/font">
    <system.webServer>
        <httpProtocol>
            <customHeaders>
                <add name="Access-Control-Allow-Origin" value="http://localhost:3000" />
            </customHeaders>
        </httpProtocol>
    </system.webServer>
</location>
where assets/font is the location of all the fonts and http://localhost:3000 is the origin you want to allow.
Add this to your .htaccess file. This solved my problem.
<FilesMatch "\.(eot|otf|ttf|woff|woff2)$">
    Header always set Access-Control-Allow-Origin "*"
</FilesMatch>
A working solution for Heroku is here: http://kennethjiang.blogspot.com/2014/07/set-up-cors-in-cloudfront-for-custom.html
(quotes follow):
Below is exactly what you can do if you are running your Rails app on Heroku and using CloudFront as your CDN. It was tested on Ruby 2.1 + Rails 4, Heroku Cedar stack.
Add CORS HTTP headers (Access-Control-*) to font assets
Add the gem font_assets to your Gemfile.
bundle install
Add config.font_assets.origin = '*' to config/application.rb. If you want more granular control, you can set different origin values per environment, e.g. in config/environments/production.rb.
Verify the headers locally: curl -I http://localhost:3000/assets/your-custom-font.ttf
Push the code to Heroku.
Configure Cloudfront to forward CORS HTTP headers
In CloudFront, select your distribution; under the Behaviors tab, select and edit the entry that controls your font delivery (for most simple Rails apps there is only one entry here). Change Forward Headers from "None" to "Whitelist", and add the following headers to the whitelist:
Access-Control-Allow-Origin
Access-Control-Allow-Methods
Access-Control-Allow-Headers
Access-Control-Max-Age
Save it and that's it!
Caveat: I found that sometimes Firefox wouldn't refresh the fonts even when the CORS error was gone. In that case, keep refreshing the page a few times to convince Firefox that you are really determined.