I'm trying to call an AWS-hosted API from my VueJS app, which is running on localhost:8080. I have used this blog post to set up vue.config.js with this block:
module.exports = {
devServer: {
proxy: 'https://0123456789.execute-api.eu-west-1.amazonaws.com/'
},
...
}
With this in place, I can use this code to make a GET request to an endpoint at that host:
this.$axios
.get('https://0123456789.execute-api.eu-west-1.amazonaws.com/mock/api/endpoint',
{
headers: {
'Content-Type': 'application/json'
}})
This is because I have configured the AWS API Gateway mock endpoint to return these headers for the OPTIONS method:
Access-Control-Allow-Headers: 'Cache-Control,Expires,Pragma,Content-Type,X-Amz-Date,Authorization,X-Api-Key,X-Amz-Security-Token'
Access-Control-Allow-Methods: 'DELETE,GET,HEAD,OPTIONS,PATCH,POST,PUT'
Access-Control-Allow-Origin: '*'
However, I cannot make this call:
this.$axios
.get('https://0123456789.execute-api.eu-west-1.amazonaws.com/lambda/api/function',
{
headers: {
'Content-Type': 'application/json'
}})
This endpoint is a Lambda integration and also has an OPTIONS method with the same headers as above.
Why do these two endpoints, configured the same way, produce different responses with axios?
UPDATE
As advised by #deniz, I have updated the .env.development file to contain:
VUE_APP_API_URI=https://0123456789.execute-api.eu-west-1.amazonaws.com/
I have also updated the axios requests to:
let url = 'mock/api/endpoint'
let headers = {
headers: {
'Content-Type': 'application/json',
},
}
this.$axios
.get(url, headers)
...and...
let url = 'lambda/api/function'
let headers = {
headers: {
'Content-Type': 'application/json',
},
}
this.$axios
.get(url, headers)
The result I get for the first GET request is:
200 OK
However the second request's response is:
Access to XMLHttpRequest at 'https://0123456789.execute-api.eu-west-1.amazonaws.com/lambda/api/function' from origin 'http://localhost:8080' has been blocked by CORS policy: Response to preflight request doesn't pass access control check: No 'Access-Control-Allow-Origin' header is present on the requested resource.
Your dev-environment proxy config does nothing more than pretend to be someone else.
That's why you don't get any CORS issues when you work with a proxy: it is a kind of middleman that acts like "I am someone else, not localhost".
module.exports = {
devServer: {
proxy: 'https://0123456789.execute-api.eu-west-1.amazonaws.com/'
},
...
}
From now on, all your requests appear to come from this proxy URL:
https://0123456789.execute-api.eu-west-1.amazonaws.com/
If you try to access the API like this:
this.$axios
.get('https://0123456789.execute-api.eu-west-1.amazonaws.com/lambda/api/function',
{
headers: {
'Content-Type': 'application/json'
}})
you should keep in mind that the proxy is already doing its disguise work and still makes the request look like it comes from another source.
Your URL when you call the API now looks like this, if I am not completely wrong:
https://0123456789.execute-api.eu-west-1.amazonaws.com/https://0123456789.execute-api.eu-west-1.amazonaws.com/lambda/api/function
All you have to do is change the axios URL in your request to:
this.$axios
.get('lambda/api/function',
{
headers: {
'Content-Type': 'application/json'
}})
and try again.
UPDATE
VUE_APP_API_URI=https://0123456789.execute-api.eu-west-1.amazonaws.com/
Wrap your URL string in quotes, like this, and remove the trailing slash:
VUE_APP_API_URI='https://0123456789.execute-api.eu-west-1.amazonaws.com'
That's a common practice for handling .env vars.
2.
The CORS error you get is a result of not using the proxy anymore.
You're requesting data from another origin now, and modern browsers like Firefox, Chrome, etc. don't allow this unless the server permits it.
Here you have to handle the server-side configuration in your API:
https://0123456789.execute-api.eu-west-1.amazonaws.com
because if you go this route, you need to give your localhost and your backend permission to handle requests made from different origins, as in your case:
"I am localhost and I request data from https://0123456789.execute-api.eu-west-1.amazonaws.com"
Normally this is forbidden by default because it is a high security risk.
But the solution is...
As you did before in your AWS API:
Access-Control-Allow-Origin: '*' is the important part that handles your CORS issues.
Make sure it is set up correctly and works as intended. You can also play around with it and set your localhost origin instead of * (allow all).
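For example, with your dev server on port 8080 (the exact origin value is an assumption based on the question's setup), the OPTIONS method could return:
Access-Control-Allow-Origin: 'http://localhost:8080'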
3.
I highly recommend using the proxy approach in development and the non-proxy approach only in production, and allowing CORS only for your frontend's origin.
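Not part of the original answer, just a minimal sketch of one way to wire that up in a Vue CLI project (the file path and axios instance are illustrative, not the asker's actual setup):
// src/api.js (illustrative)
import axios from 'axios'

// In development, keep URLs relative so they go through the devServer proxy;
// in production, point straight at the API using the env var from the .env files.
const api = axios.create({
  baseURL: process.env.NODE_ENV === 'production'
    ? process.env.VUE_APP_API_URI
    : '/',
  headers: { 'Content-Type': 'application/json' },
})

export default api

// usage (illustrative): api.get('lambda/api/function').then((res) => console.log(res.data))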
Related
In my Postman collection, I have a pre-request script that ensures I have a valid JWT token available for authentication. It looks similar to the following (I have removed the logic of checking expiration and only fetching a new token if needed):
function get_and_set_jwt() {
let base_url = pm.environment.get("BASE_URL")
pm.sendRequest({
url: base_url + '/api/auth/',
method: 'POST',
header: {
'content-type': 'application/json',
'cookie': ''
},
body: {
mode: 'raw',
raw: JSON.stringify({ email: pm.environment.get("USER_EMAIL_ADDRESS"), password: pm.environment.get("USER_PASSWORD") })
}
}, function (err, res) {
let jwt = res.json().token
postman.setEnvironmentVariable("JWT", jwt)
});
}
get_and_set_jwt();
I am attempting to set 'cookie': '' so that the request from this script will be made with no cookies. The backend I am working with sets a session cookie in addition to returning the JWT, but I want to force all future requests (when I need to renew the JWT) to not include that session information.
Unfortunately, if I check the Postman console, I see that the requests are still being sent with the cookie header, and the session cookie that was set by the earlier response. I have even tried overriding it by setting 'cookie': 'sessionid=""', but that just yields a request that includes two session ids in the cookie header (it looks like sessionid=""; sessionid=fasdflkjawew123sdf123;)
How can I send a request with pm.sendRequest with either a completely blank cookie header, or without the header at all?
I'm building an API with API Platform and a front with React (using the React template of API Platform). I configured authentication and a return to the client of an httponly cookie which contains the JWT. But when my front makes a request, it does not send this cookie... And I absolutely don't know why; I thought the browser did it automatically as long as it's on the same domain.
Here is an example of the network history from my client:
my app is running on https://localhost:3000/
Do you see something wrong in these requests? Or does anyone have an idea of where it could come from?
My app and api are using https and have a valid certificate...
If you need any additional info, feel free to ask, and thanks all !!!
I assume you work with either xhr or fetch.
Cookies ignore ports, but cross origin policy does not.
You work with two URLs (http://localhost:8443 and http://localhost:3000), so your app is making cross-origin requests because the ports differ.
xhr requires its withCredentials property to be set to true in order to send cookies with a cross-origin request.
fetch requires its credentials parameter to be set to include.
Server side, set the Access-Control-Allow-Credentials to true.
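Not from the answer above, just a minimal sketch of both sides (URLs and the allowed origin are illustrative):
// Client side, with fetch: send cookies on the cross-origin request
fetch('http://localhost:8443/api/users', {
  credentials: 'include',
})
  .then((response) => response.json())
  .then((data) => console.log(data));

// Server side, the response must include (the exact mechanism depends on your backend):
//   Access-Control-Allow-Credentials: true
//   Access-Control-Allow-Origin: http://localhost:3000   (the wildcard * is not allowed together with credentials)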
Also note that your cookie is samesite=strict. In production, if you use two domains for your app and your api, it will never be sent.
The real question here is: why use a cookie instead of the Authorization header?
Ok, I didn't know... I found nothing about this when I was trying to solve my problem.
I'm using an httponly cookie because:
I want to try it :D
Lots of security articles say it's more secure because client-side code can't access these cookies; the browser manages them. It seems to counter XSS and cookie theft. But if my cookie is stored with localforage, I think I don't have this problem, whereas with localStorage I do, no?
It's cool, no? I've done too many projects with classic bearer auth; I can improve on it now.
A big thanks for your nice answer rugolinifr !
Okay, I'm actually still having my issue... My browser is not sending the cookie...
My auth request returning bearer cookie (valid, tested with postman)
My cookie received from auth request
My GET request without that auth cookie
I'm missing something but I don't find it...
I've set credentials, Access-Control-Allow-Credentials, and samesite 'none' so the cookie is sent everywhere. Is there something else to do? Or maybe I'm getting a stupid little thing wrong?
I can't reply in a comment because there's code...
So, it's managed by the React Admin base of api-platform (https://api-platform.com/docs/admin/), but my config is like this:
const fetchHeaders = {
credentials: 'include',
};
const fetchHydra = (url, options = {}) =>
baseFetchHydra(url, {
...options,
headers: new Headers(fetchHeaders),
});
const apiDocumentationParser = (entrypoint) =>
parseHydraDocumentation(entrypoint, { headers: new Headers(fetchHeaders) }).then(
({ api }) => ({ api }),
(result) => {
...
},
);
const dataProvider = baseHydraDataProvider(entrypoint, fetchHydra, apiDocumentationParser, true);
So all GET, POST, etc. data requests are based on this config.
But my first call for authentication is done like this:
login: ({ username, password }) => {
const request = new Request(`${entrypoint}/authentication_token`, {
method: 'POST',
body: JSON.stringify({ username, password }),
headers: new Headers({ 'Content-Type': 'application/json' }),
});
return fetch(request).then((response) => {
if (response.status < 200 || response.status >= 300) {
localStorage.removeItem('isAuthenticated');
throw new Error(response.statusText);
}
localStorage.setItem('isAuthenticated', 'true');
});
},
Ok, I've found the solution:
Add credentials: 'include' to the auth request; if it is not added, the cookie won't be stored by the browser.
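A minimal sketch of that first point, based on the login call above; the only change is the credentials option:
const request = new Request(`${entrypoint}/authentication_token`, {
    method: 'POST',
    body: JSON.stringify({ username, password }),
    headers: new Headers({ 'Content-Type': 'application/json' }),
    credentials: 'include', // without this, the browser never stores the cookie set by the response
});
return fetch(request).then((response) => { /* ... unchanged ... */ });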
And the second point:
const fetchHydra = (url, options = {}) =>
baseFetchHydra(url, {
...options,
credentials: 'include',
});
credentials: 'include' does not go inside the headers option... Nice!
Faced the same problem. Tried many solutions, but they didn't work. At last I found out it was the CORS configuration of the Node backend that was causing the problem. I configured CORS the following way to solve it.
const express = require('express'); // imports needed for this snippet
const cors = require('cors');
const app = express();

const corsConfig = {
    origin: true,
    credentials: true,
};
app.use(cors(corsConfig));
app.options('*', cors(corsConfig));
Is there a way to redirect the user to the mobile version of a web app say m.foobar.com based on the User Agent header using CloudFront?
I did read up on caching based on the user's device type using the CloudFront-Is-Mobile-Viewer header. But I can only whitelist it if I'm using a custom origin to serve my assets (an ELB or an EC2 instance). In such a scenario, I could edit my server configuration to handle the redirection.
However, I'm using S3 to serve my application now and would prefer a solution within the CloudFront/S3 ecosystem.
Edit:
For S3 origins, I DO NOT have access to the CloudFront-Is-Mobile-Viewer and other CloudFront headers.
Any help, pointers would be greatly appreciated!
Background Material: http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/header-caching.html
https://aws.amazon.com/blogs/aws/enhanced-cloudfront-customization/
Here's how I'd solve it.
Lambda@Edge Function
'use strict';
exports.handler = (event, context, callback) => {
    /*
     * If mobile, redirect to the mobile domain
     */
    const isMobileHeader = 'CloudFront-Is-Mobile-Viewer';
    const request = event.Records[0].cf.request;
    const headers = request.headers;
    let response = event.Records[0].cf.response;

    // Header values arrive as arrays of { key, value } objects.
    const mobile = headers[isMobileHeader.toLowerCase()];
    if (mobile && mobile[0].value === 'true') {
        response = {
            status: '302',
            statusDescription: 'Found',
            headers: {
                location: [{
                    key: 'Location',
                    value: 'http://m.foobar.com',
                }],
            },
        };
    }
    callback(null, response);
};
CloudFront Distribution
Behaviours:
Default:
Cache Based on Selected Request Headers: Whitelist
Whitelist Headers:
- CloudFront-Is-Mobile-Viewer
Lambda Function Associations:
Event Type: Viewer Response
Lambda Function ARN: [ARN of function from Lambda@Edge Function]
Further Reading
Lambda@Edge
Lambda@Edge example functions
Edit 1
Turns out S3 origins, as Sanjay pointed out, are limited to a select set of headers for caching.
My suggestion would be to change from an S3 origin to S3 static website hosting, which we can then target as a custom origin.
S3 Bucket Configuration
S3 Bucket:
Properties:
Static Website Hosting: Use this bucket to host a website
Note the Endpoint name that you are given on this page, you will need it for the next step.
CloudFront Updates
Origins:
Create Origin:
Origin Domain Name: [Endpoint from above]
Origin ID: Custom-S3StaticHosting
Behaviours:
Default:
Origin: Custom-S3StaticHosting
Here is how I would solve it.
You don't need to perform a redirect for mobile clients (avoid redirects when possible); you can use the same URL to serve desktop or mobile content.
In your CloudFront whitelist, just whitelist the CloudFront-Is-Mobile-Viewer header. That will cache the content based on the device.
Implement a Viewer Request Lambda@Edge function and add it to CloudFront.
Lambda@Edge lets you run code at the CloudFront PoP (point of presence) before the request gets to your server.
In the Lambda@Edge function, check the User-Agent header and classify whether you want to serve mobile or desktop content. If mobile, you can change the origin URL to serve the mobile content; otherwise, change it to the desktop or default content. (A hedged sketch follows below.)
You get your HTTP headers in the viewer request Lambda@Edge event.
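This is not the answer's code, just a hedged sketch of the "change the origin" idea. Note that swapping request.origin is only possible in an origin-request trigger, where a whitelisted CloudFront-Is-Mobile-Viewer header is also available, so this sketch uses that trigger instead of parsing User-Agent; domain names and origin settings are illustrative:
'use strict';
exports.handler = (event, context, callback) => {
    const request = event.Records[0].cf.request;
    const headers = request.headers;
    const isMobile = headers['cloudfront-is-mobile-viewer']
        && headers['cloudfront-is-mobile-viewer'][0].value === 'true';
    if (isMobile) {
        // Point the request at the mobile origin instead of the default one.
        request.origin = {
            custom: {
                domainName: 'm.foobar.com',
                port: 443,
                protocol: 'https',
                path: '',
                sslProtocols: ['TLSv1.2'],
                readTimeout: 30,
                keepaliveTimeout: 5,
                customHeaders: {},
            },
        };
        // The Host header must match the new origin.
        request.headers['host'] = [{ key: 'Host', value: 'm.foobar.com' }];
    }
    callback(null, request);
};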
Lambda Edge Documentation:
http://docs.aws.amazon.com/lambda/latest/dg/lambda-edge.html
Sample node implementation is available on the reference page.
If you really want to perform a redirect, you can do that with a viewer response function and make the decision based on the device header received.
A sample implementation of viewer response is covered in this blog,
https://aws.amazon.com/blogs/aws/lambdaedge-intelligent-processing-of-http-requests-at-the-edge/
The above implementation just echoes back all the headers it received; instead of sending 200 OK, the code needs to be modified to return a 3xx status with the redirect Location.
Hope it helps.
Currently, at viewer response we don't have access to the CloudFront-Is-X-Viewer headers even after whitelisting them, and moreover the status header is read-only. I've solved it by triggering at origin request:
exports.handler = (event, context, callback) => {
const name = 'cloudfront-is-mobile-viewer';
const request = event.Records[0].cf.request;
const headers = request.headers;
if (headers[name] && headers[name][0].value == "true") {
return callback(null, {
status:'302',
statusDescription: 'Found',
headers: {
location: [{
key: 'Location',
value: `http://m.example.com${request.uri}`,
}]
}
})
}
callback(null, request);
};
How do I add a decoder to this code for Node.js? I use this code, but it gives me a string of characters (how do I decode the URI string?).
Sorry, I'm adding this as an answer because I have low reputation.
exports.handler = (event, context, callback) => {
const name = 'cloudfront-is-mobile-viewer';
const request = event.Records[0].cf.request;
const headers = request.headers;
if (headers[name] && headers[name][0].value == "true") {
callback(null, {
status:'302',
statusDescription: 'Found',
headers: {
location: [{
key: 'Location',
value: `http://m.example.com${request.uri}`,
}]
}
})
}
callback(null, request);
};
I just finished the Hello World Google Cloud Functions tutorial and received the following response headers:
Connection → keep-alive
Content-Length → 14
Content-Type → text/plain; charset=utf-8
Date → Mon, 29 Feb 2016 07:02:37 GMT
Execution-Id → XbT-WC9lXKL-0
Server → nginx
How can I add the CORS headers to be able to call my function from my website?
here we go:
exports.helloWorld = function helloWorld(req, res) {
res.set('Access-Control-Allow-Origin', "*")
res.set('Access-Control-Allow-Methods', 'GET, POST');
if (req.method === "OPTIONS") {
// stop preflight requests here
res.status(204).send('');
return;
}
// handle full requests
res.status(200).send('weeee!');
};
then you can call it with jQuery (or whatever) as usual:
$.get(myUrl, (r) => console.log(r))
I'm the product manager for Google Cloud Functions. Thanks for your question, this has been a popular request.
We don't have anything to announce just yet, but we're aware of several enhancements that need to be made to the HTTP invocation capabilities of Cloud Functions and we'll be rolling out improvements to this and many other areas in future iterations.
UPDATE:
We've improved the way you deal with HTTP in Cloud Functions. You now have full access to the HTTP Request/Response objects so you can set the appropriate CORS headers and respond to pre-flight OPTIONS requests (https://cloud.google.com/functions/docs/writing/http)
UPDATE (2022):
Just noticed there was a question about docs, and our docs have moved. Updated docs for CORS are here:
https://cloud.google.com/functions/docs/samples/functions-http-cors
You can use the CORS express middleware.
package.json
npm install express --save
npm install cors --save
index.js
'use strict';
const functions = require('firebase-functions');
const express = require('express');
const cors = require('cors')({origin: true});
const app = express();
app.use(cors);
app.get('*', (req, res) => {
res.send(`Hello, world`);
});
exports.hello = functions.https.onRequest(app);
I've just created webfunc. It's a lightweight HTTP server that supports CORS as well as routing for Google Cloud Functions. Example:
const { serveHttp, app } = require('webfunc')
exports.yourapp = serveHttp([
app.get('/', (req, res) => res.status(200).send('Hello World')),
app.get('/users/{userId}', (req, res, params) => res.status(200).send(`Hello user ${params.userId}`)),
app.get('/users/{userId}/document/{docName}', (req, res, params) => res.status(200).send(`Hello user ${params.userId}. I like your document ${params.docName}`)),
])
In your project's root, simply add an appconfig.json that looks like this:
{
"headers": {
"Access-Control-Allow-Methods": "GET, HEAD, OPTIONS, POST",
"Access-Control-Allow-Headers": "Origin, X-Requested-With, Content-Type, Accept",
"Access-Control-Allow-Origin": "*",
"Access-Control-Max-Age": "1296000"
}
}
Hope this helps.
In the python environment, you can use the flask request object to manage CORS requests.
def cors_enabled_function(request):
if request.method == 'OPTIONS':
# Allows GET requests from any origin with the Content-Type
# header and caches the preflight response for 3600s
headers = {
'Access-Control-Allow-Origin': '*',
'Access-Control-Allow-Methods': 'GET',
'Access-Control-Allow-Headers': 'Content-Type',
'Access-Control-Max-Age': '3600'
}
return ('', 204, headers)
# Set CORS headers for the main request
headers = {
'Access-Control-Allow-Origin': '*'
}
return ('Hello World!', 200, headers)
See the gcloud docs for more.
You need to respond to the 'OPTIONS' preflight request by setting its headers as follows:
if (req.method === 'OPTIONS') {
res.set('Access-Control-Allow-Methods', '*');
res.set('Access-Control-Allow-Headers', '*');
res.status(204).send('');
}
Runtime: NodeJS 10
If you tried the accepted answer but encountered a preflight error, the docs offer examples of handling it in multiple languages, with the caveat that it only works on public functions, i.e. deployed with --allow-unauthenticated:
exports.corsEnabledFunction = (req, res) => {
res.set("Access-Control-Allow-Origin", "*");
if (req.method === "OPTIONS") {
/* handle preflight OPTIONS request */
res.set("Access-Control-Allow-Methods", "GET, POST");
res.set("Access-Control-Allow-Headers", "Content-Type");
// cache preflight response for 3600 sec
res.set("Access-Control-Max-Age", "3600");
return res.sendStatus(204);
}
// handle the main request
res.send("main response");
};
Another option is to use Express as shown in this post, complete with cross-origin enabled.
You must enable CORS within all your functions, for example the hello function:
index.js
const cors = require('cors')();
// My Hello Function
function hello(req, res) {
res.status(200)
.send('Hello, Functions');
};
// CORS and Cloud Functions export
exports.hello = (req, res) => {
cors(req, res, () => {
hello(req, res);
});
}
Don't forget about package.json
package.json
{
"name": "function-hello",
"version": "0.1.0",
"private": true,
"dependencies": {
"cors": "^2.8.5"
}
}
After applying your favourite answer from here, if you're still getting this error, check for uncaught errors in your cloud function. These can result in the browser reporting a CORS error even when the underlying error has nothing to do with CORS.
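Not from the answer above, just a minimal sketch of the idea: set the CORS header first and catch errors, so a failure still reaches the browser with the header instead of surfacing as a CORS error (function name and logic are illustrative):
exports.myFunction = (req, res) => {
  // Set the CORS header before doing anything that might throw.
  res.set('Access-Control-Allow-Origin', '*');
  try {
    // ... your actual logic here ...
    res.status(200).send('ok');
  } catch (err) {
    // The error response still carries the CORS header set above.
    console.error(err);
    res.status(500).send('internal error');
  }
  // For async work, add a .catch() that responds the same way.
};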
After CORS is enabled, if you send a POST request to your function, also check your request's Content-Type header. Mine was set to "text/plain" and my browser was constantly triggering CORS errors; after setting the header to "application/json" everything worked properly.
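For example (a sketch, not from the answer; the URL is illustrative):
fetch('https://REGION-PROJECT.cloudfunctions.net/hello', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ name: 'world' }),
});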
I am trying out the new Fetch API but am having trouble with cookies. Specifically, after a successful login, there is a Cookie header in future requests, but Fetch seems to ignore that header, and all my requests made with Fetch are unauthorized.
Is it because Fetch is still not ready or Fetch does not work with Cookies?
I build my app with Webpack. I also use Fetch in React Native, which does not have the same issue.
Fetch does not use cookies by default. To enable cookies, do this:
fetch(url, {
credentials: "same-origin"
}).then(...).catch(...);
In addition to #Khanetor's answer, for those who are working with cross-origin requests, use credentials: 'include'.
Sample JSON fetch request:
fetch(url, {
method: 'GET',
credentials: 'include'
})
.then((response) => response.json())
.then((json) => {
console.log('Gotcha');
}).catch((err) => {
console.log(err);
});
https://developer.mozilla.org/en-US/docs/Web/API/Request/credentials
Just solved it, after two full days of brute force.
For me the secret was the following:
I called POST /api/auth and saw that cookies were successfully received.
Then I called GET /api/users/ with credentials: 'include' and got 401 Unauthorized, because no cookies were sent with the request.
The KEY is to set credentials: 'include' for the first /api/auth call too.
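A minimal sketch of that point, inside an async function; the paths are the ones above, the rest is illustrative:
// First call: credentials: 'include' here too, or the browser won't store the cookies.
await fetch('/api/auth', {
  method: 'POST',
  credentials: 'include',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ email, password }),
});

// Later calls: credentials: 'include' again, so the stored cookies are sent.
const users = await fetch('/api/users/', { credentials: 'include' })
  .then((response) => response.json());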
If you are reading this in 2019, credentials: "same-origin" is the default value, so a plain fetch(url).then(...) already sends cookies for same-origin requests.
Programmatically overwriting the Cookie header on the browser side won't work.
The fetch documentation mentions: "Note that some names are forbidden." And Cookie happens to be one of the forbidden header names, which cannot be modified programmatically. Take the following code for example:
Executed in the Chrome DevTools console on the page https://httpbin.org/, Cookie: 'xxx=yyy' will be ignored, and the browser will always send the value of document.cookie as the cookie if there is one.
If executed on a different origin, no cookie is sent.
fetch('https://httpbin.org/cookies', {
headers: {
Cookie: 'xxx=yyy'
}
}).then(response => response.json())
.then(data => console.log(JSON.stringify(data, null, 2)));
P.S. You can create a sample cookie foo=bar by opening https://httpbin.org/cookies/set/foo/bar in the chrome browser.
See Forbidden header name for details.
Just adding to the correct answers here for .net webapi2 users.
If you are using CORS because your client site is served from a different address than your Web API, then you also need to include SupportsCredentials=true in the server-side configuration.
// Access-Control-Allow-Origin
// https://learn.microsoft.com/en-us/aspnet/web-api/overview/security/enabling-cross-origin-requests-in-web-api
var cors = new EnableCorsAttribute(Settings.CORSSites,"*", "*");
cors.SupportsCredentials = true;
config.EnableCors(cors);
This works for me:
import Cookies from 'universal-cookie';
const cookies = new Cookies();
function headers(set_cookie=false) {
let headers = {
'Accept': 'application/json',
'Content-Type': 'application/json',
'X-CSRF-Token': $('meta[name="csrf-token"]').attr('content')
};
if (set_cookie) {
headers['Authorization'] = "Bearer " + cookies.get('remember_user_token');
}
return headers;
}
Then build your call:
export function fetchTests(user_id) {
return function (dispatch) {
let data = {
method: 'POST',
credentials: 'same-origin',
mode: 'same-origin',
body: JSON.stringify({
user_id: user_id
}),
headers: headers(true)
};
return fetch('/api/v1/tests/listing/', data)
.then(response => response.json())
.then(json => dispatch(receiveTests(json)));
};
}
My issue was my cookie was set on a specific URL path (e.g., /auth), but I was fetching to a different path. I needed to set my cookie's path to /.
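For illustration (assuming an Express backend, which is not stated in the answer), that means setting the cookie with a root path:
// Sends: Set-Cookie: sessionid=<token>; Path=/; HttpOnly
res.cookie('sessionid', token, {
  httpOnly: true,
  path: '/', // visible to every path, not just /auth
});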
If it still doesn't work for you after fixing the credentials:
I was also using
credentials: "same-origin"
and it used to work, then it suddenly didn't anymore. After much digging I realized that I had changed my website URL to http://192.168.1.100 to test it on the LAN, and that was the URL being used to send the request, even though I was on http://localhost:3000.
So in conclusion, be sure that the domain of the page matches the domain of the fetch url.