React + Django Axios Issues

I have a React application linked to a Django backend on two separate servers. I am using DRF for Django and I allowed CORS using django-cors-headers. For some reason, when I curl a POST to the backend, the request goes through. However, when I POST to the backend with axios, I get an error: the status of the axios POST request is failed, and the request takes more than 10 seconds to complete. My code was working locally (both the React and Django code), but when I deployed to an AWS EC2 Ubuntu instance, the axios requests stopped working.
Console error logs
OPTIONS http://10.0.3.98:8000/token-auth/ net::ERR_CONNECTION_TIMED_OUT
{
  "config": {
    "transformRequest": {},
    "transformResponse": {},
    "timeout": 0,
    "xsrfCookieName": "XSRF-TOKEN",
    "xsrfHeaderName": "X-XSRF-TOKEN",
    "maxContentLength": -1,
    "headers": {
      "Accept": "application/json, text/plain, */*",
      "Content-Type": "application/json;charset=UTF-8",
      "Access-Control-Allow-Origin": "*"
    },
    "method": "post",
    "url": "http://10.0.3.98:8000/token-auth/",
    "data": "{\"username\":\"testaccount\",\"password\":\"testpassword\"}"
  },
  "request": {}
}
Here is my request code:
axios.post('http://10.0.3.98:8000/token-auth/',
  JSON.stringify(data),
  {
    mode: 'no-cors',
    headers: {
      'Content-Type': 'application/json',
      'Access-Control-Allow-Origin': '*'
    },
  },
).then(res =>
  console.log(JSON.stringify(res))
).catch(err =>
  console.log(JSON.stringify(err))
);
My curl command, which worked:
curl -d '{"username":"testaccount", "password":"testpassword"}' -H "Content-Type: application/json" -X POST http://10.0.3.98:8000/token-auth/
UPDATE 1
On Firefox I am getting the warning:
Cross-Origin Request Blocked: The Same Origin Policy disallows reading
the remote resource at http://10.0.3.98:8000/token-auth/. (Reason:
CORS request did not succeed).[Learn More]
UPDATE 2
Perhaps it has something to do with my AWS VPC and subnets? My Django server is in a private subnet while my React app is in a public subnet.
UPDATE 3 - my idea of what the problem is
I think the reason my axios requests aren't working is that the requests I'm making set the Origin request header to http://18.207.204.70:3000 (the public/external IP address) instead of the private/internal IP address, which is http://10.0.2.219:3000. I read online that Origin is a forbidden header field, so it can't be changed. How can I set the origin then? Do I have to use a proxy, and if so, how?

Try this HTTP client instead of axios. It's called superagent (https://www.npmjs.com/package/superagent); install it in your React app via npm:
npm i superagent
and use this instead of axios:
import request from 'superagent'

const payload = {
  "1": this.state.number,
  "2": this.state.message
}

request.post('LINK HERE')
  .set('Content-Type', 'application/x-www-form-urlencoded')
  .send(payload)
  .end((err, res) => {   // arrow function so `this` still refers to the component
    if (res.text === 'success') {
      this.setState({
        msgAlert: 'Message Sent!',
      })
    } else {
      console.log('message failed/error')
    }
  });

The issue here is that the request is being made from the client's browser, not from your server, so the browser must be able to reach the API. You need to either use a reverse proxy or have the client request the API server directly over an address it can reach. You cannot use local SSH forwarding either.
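Since the request is made by the browser, the origin Django sees is the frontend's public address (http://18.207.204.70:3000), and that is the origin the backend has to allow. For reference, a django-cors-headers setup along those lines might look roughly like this in settings.py (a sketch with assumed values; the question does not show the actual settings, and fixing CORS alone will not help if the browser cannot reach the private IP at all):
# settings.py (sketch; the origin and hosts below are assumptions from the question)
INSTALLED_APPS = [
    # ...
    'corsheaders',
]

MIDDLEWARE = [
    'corsheaders.middleware.CorsMiddleware',  # place as high as possible
    # ... the rest of the middleware
]

# The browser sends the *public* address of the React app as the Origin,
# so that is what must be whitelisted here.
CORS_ALLOWED_ORIGINS = [            # CORS_ORIGIN_WHITELIST in older versions
    'http://18.207.204.70:3000',
]

ALLOWED_HOSTS = ['10.0.3.98']       # plus any public hostname the API is served on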

How can I enable CORS on AWS resources?

Express server config:
app.use(
  cors({
    origin: "*",
    methods: "GET,PUT,POST",
    allowedHeaders: "*",
    exposeHeaders: "*",
    optionsSuccessStatus: 200,
  })
);
// this fails as well
app.use(cors());
Client-side request:
let value = await axios({
  url: "https://my.api.here.cloud/verifyGroup",
  method: "post",
  data: data.value,
});
Access to XMLHttpRequest at 'https://my.api.here.cloud/verifyGroup' from origin 'https://my.frontend.here.cloud' has been blocked by CORS policy: No 'Access-Control-Allow-Origin' header is present on the requested resource.
I'm fairly confident it has something to do with AWS resources, because in my local environment these configs allow CORS, but as soon as it's up on AWS I'm getting blocked by CORS.
I'm running an ECS Fargate service that runs a single task containing a frontend container and a backend container. Each of these containers has an ALB attached to it.
I came across this post here which gave me some hope, so I tried implementing it in my own solution, but it still came up short.
My attempt at implementing the solution from the other post; this is what's being returned from the endpoint:
res.json({
  headers: {
    "Access-Content-Allow-Origin": "*",
  },
  statusCode: 200,
  body: {
    value: true,
  },
});

AWS API Gateway CORS error when reading from POST body

I have an application written in React, running on localhost, which makes API calls to API Gateway in AWS. API Gateway forwards requests to a Lambda function, which returns a response. I have enabled CORS on the AWS side.
At the moment, whenever I click the 'request' button in the application I get a response from the gateway. Here is my current Python code:
import json
def lambda_handler(event, context):
    response = {}
    response['result'] = "Success"
    response['message'] = "Updated successfully!"
    return {
        'headers': {
            'Access-Control-Allow-Headers': 'Content-Type',
            'Access-Control-Allow-Origin': '*',
            'Access-Control-Allow-Methods': 'POST'
        },
        "body": json.dumps(response)
    }
And here's the body of the request:
{
  "ID": "1101",
  "RequestDate": "2021-02-28"
}
This works fine. I get the 'message' value from this response and can display it without problems.
Next, I want the message to include some data from the request. For example, instead of 'Updated successfully!' I would like to get the RequestDate from the request and return 'Updated successfully on 2021-02-28'.
I added these two lines:
def lambda_handler(event, context):
    body = json.loads(event['body'])
    request_date = body['RequestDate']
    response = {}
    response['result'] = "Success"
    response['message'] = "Updated successfully!"
    return {
        'headers': {
            'Access-Control-Allow-Headers': 'Content-Type',
            'Access-Control-Allow-Origin': '*',
            'Access-Control-Allow-Methods': 'POST'
        },
        "body": json.dumps(response)
    }
As soon as I make this change, I get the following error in my application:
Access to fetch at url from origin 'http://localhost:3000' has been blocked by CORS policy: No 'Access-Control-Allow-Origin' header is present on the requested resource. If an opaque response serves your needs, set the request's mode to 'no-cors' to fetch the resource with CORS disabled.
This only happens when I add request_date = body['RequestDate']. I tried returning just the body and that worked fine as well.
In my React application I add the following headers as well:
async callAPI(url, method, data) {
  let result = await fetch(url, {
    method: method,
    headers: {
      'Accept': 'application/json',
      'Content-Type': 'application/json',
      'Access-Control-Allow-Origin': '*'
    },
    body: JSON.stringify({
      data
    })
  })
  return result.json().then(body => this.notify(body['message']));
}
I tried enabling CORS and deploying the resource again, but to no avail. I have added Access-Control-Allow-Origin to the allowed headers in AWS. As I mentioned, the POST method works fine prior to adding that one line. What could be wrong here and how can I remedy it?
EDIT:
One more thing: I get this error only from my application running on localhost. Curl or any REST client works fine.
EDIT2:
Added fetch code
Setting up CORS in Lambda depends on how you set up API Gateway. API Gateway has several modes [REST, HTTP, WebSocket]. In the case of REST, API Gateway does some pre-processing on the incoming request, like parameter validation, before passing it to Lambda. HTTP proxy is just that, a pass-through to Lambda, and WebSockets is not really relevant to this discussion.
I am assuming it is because you are using API Gateway in the standard configuration and you have not enabled CORS on API Gateway. The code you provided above will work for API Gateway configured for HTTP.
If you are using the CDK or CloudFormation then you must configure CORS there; otherwise the easiest way is to use the console.
Go to your API in the AWS console, select Resources, select your method or service, and enable CORS from the Actions menu. Then publish your updated API.
Here is a link to the AWS documentation that outlines how to do it.
Some advice: when testing endpoints via a browser, it is best not to use localhost or 127.0.0.1; it has unintended consequences. Edit your hosts file, give yourself a domain name, and use that domain name instead.
CORS is there for a reason: to prevent cross-site scripting and cross-origin attacks. Try not to use *; it's fine for testing, but not for production. If you have code on, say, S3, and your REST services on, say, API Gateway, you can front both with CloudFront using the same origin, and let CloudFront forward based on URL (e.g. /API) to API Gateway and everything else to S3. That way everything has the same domain.
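If you end up using Lambda proxy integration instead, API Gateway passes the preflight straight through, so the function itself has to answer OPTIONS as well as POST. A minimal sketch of such a handler (an illustration, not the poster's code; it assumes the REST-style proxy event shape where the method arrives as event['httpMethod'], and the open origin should be tightened for production):
import json

CORS_HEADERS = {
    'Access-Control-Allow-Origin': '*',
    'Access-Control-Allow-Headers': 'Content-Type',
    'Access-Control-Allow-Methods': 'OPTIONS,POST'
}

def lambda_handler(event, context):
    # With proxy integration the preflight reaches the function,
    # so answer OPTIONS before touching the request body.
    if event.get('httpMethod') == 'OPTIONS':
        return {'statusCode': 200, 'headers': CORS_HEADERS, 'body': ''}

    body = json.loads(event['body'])
    return {
        'statusCode': 200,
        'headers': CORS_HEADERS,
        'body': json.dumps({'result': 'Success', 'received': body})
    }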
This is what I have on my server and it works fine. Using fetch on the front-end.
Server Code:
import json

def lambda_handler(event, context):
    result = {"hello": "world"}
    return {
        'statusCode': 200,
        'headers': {
            'Content-Type': 'application/json',
            "Access-Control-Allow-Origin": "*"
        },
        'body': json.dumps(result)
    }
Front-end code using ES6:
const url = "Your url";
const path = "Your desired path";
const data = { "hello": "world" };

const response = await fetch(url + path, {
  method: "POST",
  cache: "no-cache",
  mode: "cors",
  body: JSON.stringify(data),
});
Add statusCode to the response returned from the Lambda:
return {
    'statusCode': 200,
    'headers': {
        'Access-Control-Allow-Headers': 'Content-Type',
        'Access-Control-Allow-Origin': '*',
        'Access-Control-Allow-Methods': 'POST'
    },
    'body': json.dumps(response)
}
If there is any error in your Lambda, it will return a default 5XX response without any CORS headers, and in such cases the browser will complain that the CORS headers were not found.
You can put your current code in a try block, and in the except block print the error and return a default response like the one below:
return {
    'statusCode': 500,
    'headers': {
        'Access-Control-Allow-Headers': 'Content-Type',
        'Access-Control-Allow-Origin': '*',
        'Access-Control-Allow-Methods': 'POST'
    },
    'body': json.dumps({'message': 'Unexpected error'})
}
Add some log statements in your Lambda and check the API Gateway configuration for the Lambda proxy setting.
Make sure you check your API Gateway logs and Lambda function logs for more details about the error.
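Putting those pieces together with the handler from the question, the whole function might look roughly like this (a sketch; the RequestDate field follows the question's request body, and it assumes Lambda proxy integration where event['body'] arrives as a JSON string):
import json

CORS_HEADERS = {
    'Access-Control-Allow-Headers': 'Content-Type',
    'Access-Control-Allow-Origin': '*',
    'Access-Control-Allow-Methods': 'POST'
}

def lambda_handler(event, context):
    try:
        body = json.loads(event['body'])
        request_date = body['RequestDate']
        response = {
            'result': 'Success',
            'message': 'Updated successfully on {}!'.format(request_date)
        }
        return {
            'statusCode': 200,
            'headers': CORS_HEADERS,
            'body': json.dumps(response)
        }
    except Exception as exc:
        # Log to CloudWatch, but still return CORS headers so the browser
        # surfaces the real 500 instead of a misleading CORS error.
        print('Error handling request: {}'.format(exc))
        return {
            'statusCode': 500,
            'headers': CORS_HEADERS,
            'body': json.dumps({'message': 'Unexpected error'})
        }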

Vue PWA login works in dev but returns 401 in production

I have a Vue.js PWA with a Django Rest Framework backend which works correctly locally on my laptop (using a browser). When I deploy it to production it continues to work correctly when I log in using a browser; however, it fails to log in when opened as a PWA (i.e. on a phone or as a PWA saved from a browser).
Here's my login code:
axios
  .post("/api/get-token/", user)
  .then(res => {
    localStorage.setItem('user-token', res.data.token);
    axios.defaults.headers.common['Authorization'] = res.data.token;
    commit(AUTH_SUCCESS, res.data);
    resolve(res);
  })
  .catch(err => {
    commit(AUTH_ERROR, err);
    reject(err);
  });
As mentioned, everything works locally and in production when logging in via a browser. The problem comes when trying to log in using the PWA.
When trying to log in to the PWA, I get the following:
POST https://www.example.com/api/get-token/ 401 (Unauthorized)
Doing a console log of the error received from the server I get:
{
  detail: "Invalid token header. No credentials provided."
  __proto__: Object
  status: 401
  statusText: "Unauthorized"
  headers: {allow: "POST, OPTIONS", connection: "keep-alive", content-length: "59", content-type: "application/json", date: "Thu, 06 Feb 2020 15:00:11 GMT", …}
  config:
    url: "/api/get-token/"
    method: "post"
    data: "{"username":"test#example.com","password":"password"}"
    headers:
      Accept: "application/json, text/plain, */*"
      Authorization: "Token "
      Content-Type: "application/json;charset=utf-8"
      __proto__: Object
    transformRequest: [ƒ]
    transformResponse: [ƒ]
    timeout: 0
    adapter: ƒ (t)
    xsrfCookieName: "csrftoken"
    xsrfHeaderName: "X-CSRFToken"
    maxContentLength: -1
    validateStatus: ƒ (t)
}
In production, the following works:
Log into the site using a browser on my laptop or on a phone.
Then open the PWA. This works correctly and I can continue using the PWA.
The only issue comes when trying to log in using the PWA.
Can you log in on a phone locally? I had this problem once too; the cause was that the frontend and backend were not running on the same host. This solved my problem:
devServer: {
  proxy: {
    '/api': {
      target: 'http://localhost:5000'
    }
  }
}
I eventually figured out the issue. For some reason the following was being POSTed in the header: Authorization: "Token ".
This is really strange, because when logging in via /api/get-token/ no token is required, since this is the login route. Also, it works perfectly from a browser; the only issue is when trying from the PWA.
Anyway, changing the header so that Authorization explicitly has no value fixed the issue: Authorization: ""
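For context on why the empty "Token " header produces that exact error: Django Rest Framework's TokenAuthentication rejects an Authorization header that contains the Token keyword but no value. A rough illustration of that behaviour (an approximation for clarity, not the library's actual source):
def parse_token_header(header):
    """Roughly how DRF's TokenAuthentication treats the Authorization header."""
    parts = header.split()
    if not parts or parts[0].lower() != 'token':
        return None  # no token auth attempted; other authenticators may run
    if len(parts) == 1:
        # 'Token ' (keyword with no value) lands here, which is the
        # "Invalid token header. No credentials provided." case.
        raise ValueError('Invalid token header. No credentials provided.')
    return parts[1]

# 'Authorization: Token ' -> error; 'Authorization: ' -> no token auth at all.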

Django, Heroku, boto: direct file upload to Google cloud storage

In Django projects deployed on Heroku, I used to upload files to Google Cloud Storage via boto. However, recently I have had to upload large files, which causes Heroku to time out.
I am following Heroku's documentation about direct file upload to S3, and customizing it as follows:
Python:
conn = boto.connect_gs(gs_access_key_id=GS_ACCESS_KEY,
                       gs_secret_access_key=GS_SECRET_KEY)
presignedUrl = conn.generate_url(expires_in=3600, method='PUT', bucket=<bucketName>, key=<fileName>, force_http=True)
JS:
url = 'https://<bucketName>.storage.googleapis.com/<fileName>?Signature=...&Expires=1471451569&GoogleAccessId=...'; // "presignedUrl"
postData = new FormData();
postData.append(...);
...
$.ajax({
  url: url,
  type: 'PUT',
  data: postData,
  processData: false,
  contentType: false,
});
I got the following error message:
XMLHttpRequest cannot load http:/... Response to preflight request doesn't pass access control check: No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin 'http://localhost:8000' is therefore not allowed access.
EDIT:
The output of gsutil cors get gs://<bucketName>:
[{"maxAgeSeconds": 3600, "method": ["GET", "POST", "HEAD", "DELETE", "PUT"], "origin": ["*"], "responseHeader": ["Content-Type"]}]
It seems the CORS configuration is OK. So how do I solve the problem? Thanks.
EDIT 2:
The headers of the OPTIONS request from Firefox:
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Encoding: gzip, deflate
Accept-Language: zh-TW,zh;q=0.8,en-US;q=0.5,en;q=0.3
Access-Control-Request-Method: PUT
Connection: keep-alive
Host: <bucketName>.storage.googleapis.com
Origin: http://localhost:8000
User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:48.0) Gecko/20100101 Firefox/48.0
The headers of the OPTIONS request from Chrome:
Accept:*/*
Accept-Encoding:gzip, deflate, sdch
Accept-Language:zh-TW,zh;q=0.8,en;q=0.6,en-US;q=0.4,zh-CN;q=0.2
Access-Control-Request-Headers:
Access-Control-Request-Method:PUT
Connection:keep-alive
Host:directupload.storage.googleapis.com
Origin:http://localhost:8000
Referer:http://localhost:8000/
User-Agent:Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/52.0.2743.116 Safari/537.36
X-Client-Data:CIe2yQEIprbJAQjznMoB
The header issue is not coming from your app; I think it's coming from the Cloud Storage bucket. I had the same issue when setting up an API: the resource you are posting to is missing the header.
https://cloud.google.com/storage/docs/cross-origin
While useful for preventing malicious behavior, this security measure also prevents useful and legitimate interactions between known origins. For example, a script on a page hosted from Google App Engine at example.appspot.com might want to use static resources stored in a Cloud Storage bucket at example.storage.googleapis.com. However, because these are two different origins from the perspective of the browser, the browser won't allow a script from example.appspot.com to fetch resources from example.storage.googleapis.com using XMLHttpRequest because the resource being fetched is from a different origin.
So it looks like you need to configure the bucket to allow CORS requests. The Google documentation shows the following command to run with the gsutil CLI:
https://cloud.google.com/storage/docs/cross-origin#Configuring-CORS-on-a-Bucket
gsutil cors set cors-json-file.json gs://example
[
  {
    "origin": ["http://mysite.heroku.com"],
    "responseHeader": ["Content-Type"],
    "method": ["GET", "HEAD", "DELETE", "PUT"],
    "maxAgeSeconds": 3600
  }
]
That would allow you to get, upload, and delete content. Hope that helps.
Based on the information in EDIT 2, something is wrong with the request. The preflight (OPTIONS) request includes the header ACCESS-CONTROL-REQUEST-HEADER. This is not a valid CORS header; the correct header is ACCESS-CONTROL-REQUEST-HEADERS, notice the 'S' at the end.
Even if the header were correct, it should not be requesting authorization for an Access-Control-Allow-Origin header. ACCESS-CONTROL-ALLOW-ORIGIN is not a header that is sent from the client. It is a header that is automatically sent in the response from the server to the client when the server gets a preflight request. The client/browser will not allow a cross-origin PUT request unless the preflight response from the cross-origin server includes an ACCESS-CONTROL-ALLOW-ORIGIN header authorizing the browser document's current origin.
The presence of the bad header appears to correlate well with the error response you are receiving. However, it looks like that header was probably not in your original code; it looks like you added it later (based on your comments). Make sure to take that header config out; it is definitely not correct.
So I am a little confused about where that header is coming from, but I think it is the source of your problem.
It looks like you are using jQuery to make the AJAX PUT request. All I can really suggest is to make sure you haven't called $.ajaxSetup() somewhere in your JS code that might be configuring the bad header.
After many trials and errors, I came up with the following. It works; however, some of the uploaded images are sometimes not visible, while at other times they are OK. I have no idea why this happens.
I'd like to solicit more ideas on why the file uploads succeed but some of the images are corrupted.
gsutil commands:
gsutil cors set cors.json gs://<bucketName>
gsutil defacl ch -u allUsers:R gs://<bucketName>
Content of cors.json file:
[
  {
    "origin": ["*"],
    "responseHeader": ["Content-Type"],
    "method": ["GET", "POST", "HEAD", "DELETE", "PUT"],
    "maxAgeSeconds": 3600
  }
]
HTML:
<p id=status>Choose your avatar:</p>
<input id=fileInput type=file>
JavaScript:
$(document).on('change', '#fileInput', function() {
  var $this = $(this);
  var file = $this[0].files[0];
  $.ajax({
    url: 'upload/sign/?fileName=' + file.name + '&contentType=' + file.type,
    type: 'GET'
  })
  .done(function(data) {
    var response = JSON.parse(data);
    uploadFile(file, response.presignedUrl, response.url, response.contentType)
  })
  .fail(function() {
    alert('Unable to obtain a signed URL.');
  });
});

function uploadFile(file, presignedUrl, url, contentType) {
  var postData = new FormData();
  postData.append('file', file);
  $.ajax({
    url: presignedUrl,
    type: 'PUT',
    data: postData,
    headers: {
      'Content-Type': contentType,
    },
    processData: false,
    contentType: false
  })
  .done(function() {
    alert('File upload successful');
  })
  .fail(function() {
    alert('Unable to upload the file.');
  });
}
Django:
Project's urls.py:
urlpatterns = [
    ...
    url(r'upload/', include('upload.urls', namespace='upload')),
]
App's urls.py:
urlpatterns = [
    url(r'^$', views.upload, name='upload'),
    url(r'^sign/', views.sign, name='sign'),
]
views.py:
def upload(request):
    # ... render the template

def sign(request):
    fileName = request.GET.get('fileName')
    contentType = request.GET.get('contentType')
    conn = boto.connect_gs(gs_access_key_id=GS_ACCESS_KEY,
                           gs_secret_access_key=GS_SECRET_KEY)
    presignedUrl = conn.generate_url(3600, 'PUT', GS_BUCKET_NAME, fileName, headers={'Content-Type': contentType})
    return HttpResponse(
        json.dumps({
            'presignedUrl': presignedUrl,
            'url': GS_URL + fileName,
            'contentType': contentType
        })
    )
In my experience, I would like to note that it is not possible to bypass Heroku's 30-second timeout without using the JavaScript AWS SDK. Don't use the Python AWS SDK (boto); you have to leave the back-end out of this completely. Now, for your access origin error, the solution is your CORS configuration. You should put this in your CORS policy:
[
  {
    "AllowedHeaders": [
      "*"
    ],
    "AllowedMethods": [
      "GET",
      "PUT",
      "POST",
      "DELETE",
      "HEAD"
    ],
    "AllowedOrigins": [
      "*"
    ],
    "ExposeHeaders": [
      "ETag"
    ]
  }
]
Next, for the JavaScript AWS SDK, follow my answer here: Upload file to s3 in front-end with JavaScript AWS SDK on django
There's a lot missing from the answer I linked, as I had to come up with a custom solution because the JavaScript AWS SDK flow also ran past Heroku's 30-second timeout. What I did was upload the video via the JavaScript SDK, then pass the video's AWS URL to another view in a two-step Django form. By changing Django views I reset Heroku's 30-second timeout with the video already in my S3 bucket, and passed the fileKey to my URL with the redirect. On the second part of the form I gather the other information for my Django object and then submit it. It was hard going through all the documentation on direct upload to S3. If anyone is reading this and needs help, please comment for more. I'm on my phone now but I'll kindly respond from my desktop to post code snippets ✌🏾

POST request from Postman 4.2.2 to CouchDB on localhost does not return cookie

I am sending a POST request to my local test_db (CouchDB database):
POST http://localhost:1970/_session
using the following values in the request body:
x-www-form-urlencoded
name: admin
password: xxxxxx
The request executes correctly, but in the response I do not get a cookie:
{
  "ok": true,
  "name": null,
  "roles": [
    "_admin",
    "dbadmin"
  ]
}
Do you know why?
Thank you for your help.
According to the documentation, the data returned is correct. The cookie is not in the JSON body; it is returned in the Set-Cookie response header, so you need to read it from the response headers.
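For illustration, a minimal sketch of reading the session cookie with Python's requests library (assuming the host, port, and admin credentials from the question):
import requests

# Send the credentials as form data, just like the Postman request.
resp = requests.post(
    'http://localhost:1970/_session',
    data={'name': 'admin', 'password': 'xxxxxx'},
)

print(resp.json())                      # {'ok': True, ...} as in the question
print(resp.headers.get('Set-Cookie'))   # the raw Set-Cookie header
print(resp.cookies.get('AuthSession'))  # the CouchDB session cookie value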