How to block external requests to a Next.js API route (Django backend)

I am using Next.js API routes to proxy a custom API built with Python and Django that has not yet been made public. I followed the tutorial on Vercel to add CORS as middleware to the route, but it hasn't provided the exact functionality I wanted.
I do not want just anyone to be able to make a request to the route; that somewhat defeats its purpose, although it does at least hide my API key.
Question
Is there a better way of properly stopping requests made to the route from external sources?
Any answer is appreciated!
// API route: proxies the private Django API so the key stays server-side
import axios from "axios";
import Cors from 'cors'

// Initializing the CORS middleware
const cors = Cors({
  methods: ['GET', 'HEAD'],
  allowedHeaders: ['Content-Type', 'Authorization', 'Origin'],
  origin: ["https://squadkitresearch.net", 'http://localhost:3000'],
  optionsSuccessStatus: 200,
})

// Helper to run Express-style middleware in a Next.js API route
function runMiddleware(req, res, fn) {
  return new Promise((resolve, reject) => {
    fn(req, res, (result) => {
      if (result instanceof Error) {
        return reject(result)
      }
      return resolve(result)
    })
  })
}

async function getApi(req, res) {
  try {
    await runMiddleware(req, res, cors)
    const {
      query: { url },
    } = req;
    const apiUrl = `https://xxx/api/${url}`;
    const response = await axios.get(apiUrl, {
      headers: {
        Authorization: `Api-Key xxxx`,
        Accept: "application/json",
      }
    });
    if (response.status === 200) {
      res.status(200).send(response.data)
    }
    console.log('Server Side response.data -->', response.data)
  } catch (error) {
    console.log('Error -->', error)
    res.status(500).send({ error: 'Server Error' });
  }
}

export default getApi
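For context, the route above is meant to be called from the same Next.js app with a relative URL, so the request stays same-origin and the API key never reaches the browser. A minimal usage sketch (the /api/getApi path and query value are assumptions, not from the question):

// Client-side usage sketch; '/api/getApi?url=some/endpoint' is an assumed path.
const res = await fetch('/api/getApi?url=some/endpoint');
const data = await res.json();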

Sorry for the late answer.
I believe this is just the default behaviour of Next.js, so you are already set; don't worry. It's the opposite direction that takes a little customization: you only need extra configuration if you want to allow external sources to fetch your Next.js API.
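If you do want an explicit guard on top of that, one hedged option (not from the original answer) is to require a shared secret header on the route and send it only from your own server-side code. The PROXY_SECRET variable and x-internal-token header name below are illustrative assumptions:

// Hedged sketch: reject requests missing a shared secret header.
// PROXY_SECRET and 'x-internal-token' are assumed names, not from the question.
export default async function guardedApi(req, res) {
  if (req.headers['x-internal-token'] !== process.env.PROXY_SECRET) {
    return res.status(403).json({ error: 'Forbidden' });
  }
  // ...proxy to the Django API as in the route above...
}

Keep in mind that any token shipped to the browser is visible to users, so a header check only raises the bar; truly private calls have to happen server-side.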

Related

How would I update the authorization header from a cookie on a GraphQL Apollo mutation or query

I have the following _app.js for my Next.js app.
I want to change the authorization header on login via a cookie that will be set. I think I can handle the cookie and login functionality, but I am stuck on how to get the cookie into the ApolloClient authorization header. Is there a way to pass, in a mutation, the headers with a token from the cookie? Any thoughts here?
I have the cookie working, so I have a logged-in token, but I need to change the ApolloClient token to the new one via the cookie, in _app.js. I'm not sure how this is done.
import "../styles/globals.css";
import { ApolloClient, ApolloProvider, InMemoryCache } from "#apollo/client";
const client = new ApolloClient({
uri: "https://graphql.fauna.com/graphql",
cache: new InMemoryCache(),
headers: {
authorization: `Bearer ${process.env.NEXT_PUBLIC_FAUNA_SECRET}`,
},
});
console.log(client.link.options.headers);
function MyApp({ Component, pageProps }) {
return (
<ApolloProvider client={client}>
<Component {...pageProps} />
</ApolloProvider>
);
}
export default MyApp;
UPDATE: I've read something in the Apollo docs about setting this to pass the cookie, but I don't quite understand it.
const link = createHttpLink({
  uri: '/graphql',
  credentials: 'same-origin'
});

const client = new ApolloClient({
  cache: new InMemoryCache(),
  link,
});
UPDATE: I have made good progress with the above; it lets me pass the header via the context option of useQuery, like below. Now the only problem is that useQuery seems to run before cookieData has loaded, or something like that, because if I pass in an API key directly it works, but the fetched cookie gives me an "invalid db secret" error, and it's the same key.
const { data: cookieData, error: cookieError } = useSWR(
  "/api/cookie",
  fetcher
);
console.log(cookieData);

// const { loading, error, data } = useQuery(FORMS);
const { loading, error, data } = useQuery(FORMS, {
  context: {
    headers: {
      authorization: "Bearer " + cookieData,
    },
  },
});
Any ideas on this problem would be great.
If you need to run some GraphQL queries after some other data is loaded, then I recommend putting the latter queries in a separate React component with the secret as a prop and only loading it once the former data is available. Or you can use lazy queries.
Separate component:
const Form = ({ cookieData }) => {
  useQuery(FORMS, {
    context: {
      headers: {
        authorization: "Bearer " + cookieData,
      },
    },
  });
  return /* ... whatever ... */;
};

const FormWrapper = () => {
  const { data: cookieData, error: cookieError } = useSWR(
    "/api/cookie",
    fetcher
  );
  return cookieData ? <Form cookieData={cookieData} /> : "...loading";
};
I might be missing some nuances with when/how React will mount and unmount the inner component, so I suppose you should be careful with that.
Manual Execution with useLazyQuery
https://www.apollographql.com/docs/react/data/queries/#manual-execution-with-uselazyquery
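For completeness, a hedged sketch of the lazy-query variant, assuming useLazyQuery is imported from '@apollo/client' and useEffect from 'react': the query only fires once the cookie has arrived.

// Hedged sketch: run the query manually once cookieData is available.
const [loadForms, { loading, error, data }] = useLazyQuery(FORMS);

useEffect(() => {
  if (cookieData) {
    loadForms({
      context: {
        headers: { authorization: 'Bearer ' + cookieData },
      },
    });
  }
}, [cookieData, loadForms]);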

Access to fetch at 'API_Gateway_URL' from origin 'S3_host_url' has been blocked by CORS policy

I know this question has been asked before, but I couldn't find any answer that solves my problem, so please forgive me if it is repetitive.
I have created a Lambda function that reads data from a DynamoDB table, and I created an API Gateway endpoint for this Lambda function.
When I hit the URL directly in my browser, I get the expected result. But when I fetch the URL in my React app (hosted on an S3 bucket with static website hosting), I get the error below:
Access to fetch at 'API_gateway_url' from origin 'S3_static_website_endpoint' has been blocked by CORS policy: Response to preflight request doesn't pass access control check: No 'Access-Control-Allow-Origin' header is present on the requested resource.
On searching the web, I found that I need to set the 'Access-Control-Allow-Origin' header in my Lambda, and I have done so, but I'm still getting the same issue.
PS: I'm posting this question after a whole day of trial and error and looking at different answers, so if you know the answer please help me!
Lambda function:
console.log('function starts');

const AWS = require('aws-sdk');
const dynamoDB = new AWS.DynamoDB.DocumentClient();

exports.handler = (event, context, callback) => {
  function formatResponse(data, code) {
    return {
      statusCode: code,
      headers: {
        'Access-Control-Allow-Origin': '*',
        'Access-Control-Allow-Credentials': true,
        'Access-Control-Allow-Headers': 'X-Api-Key'
      },
      body: JSON.stringify(data)
    };
  }

  let param = {
    TableName: 'tableName',
    Limit: 100 // maximum result of 100 items
  };

  // Scan the entire table in DynamoDB and return the results.
  // The response must be passed to `callback`; a bare `return` inside
  // the scan callback never reaches API Gateway.
  dynamoDB.scan(param, function (err, data) {
    if (err) {
      callback(null, formatResponse(err, 400));
    } else {
      callback(null, formatResponse(data, 200));
    }
  });
};
React app:
import React from 'react';

class App extends React.Component {
  constructor(props) {
    super(props);
    this.state = {
      isLoading: true,
      dataSource: {}
    };
  }

  async componentDidMount() {
    try {
      const response = await fetch('API_gateway_url');
      let responseJson = await response.json();
      this.setState(
        {
          isLoading: false,
          dataSource: responseJson
        },
        function () { }
      );
    } catch (error) {
      console.error(error);
    }
  }

  render() {
    let { dataSource } = this.state;
    if (this.state.isLoading) {
      return <div>Loading...</div>;
    } else {
      return (
        <div>
          {dataSource.Items.map(item => (
            <div key={item.PlayerId}>
              <h1>{item.PlayerId}</h1>
              <li>{item.PlayerName}</li>
              <li>{item.PlayerPosition}</li>
              <li>{item.PlayerNationality}</li>
            </div>
          ))}
        </div>
      );
    }
  }
}

export default App;
I suspect that your Lambda is not run for OPTIONS requests (i.e. the "preflight"). You can configure CORS in your API Gateway, which should resolve the problem. See Enabling CORS for a REST API resource.
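If you are using Lambda proxy integration (where OPTIONS does reach the function), a hedged alternative is to answer the preflight from the handler itself:

// Hedged sketch: short-circuit the CORS preflight in a proxy-integration Lambda.
exports.handler = (event, context, callback) => {
  if (event.httpMethod === 'OPTIONS') {
    return callback(null, {
      statusCode: 204,
      headers: {
        'Access-Control-Allow-Origin': '*',
        'Access-Control-Allow-Methods': 'GET,OPTIONS',
        'Access-Control-Allow-Headers': 'Content-Type,X-Api-Key',
      },
      body: '',
    });
  }
  // ...fall through to the DynamoDB scan for GET requests...
};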
This was resolved by using the cors package.
Implementation can be found here:
https://epsagon.com/blog/aws-lambda-express-getting-started-guide/
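In case the link goes stale, the core of that approach is just the standard cors middleware inside the Express app that serverless-http wraps; a minimal hedged sketch:

// Hedged sketch of the cors-package approach described in the linked guide.
const serverless = require('serverless-http');
const express = require('express');
const cors = require('cors');

const app = express();
app.use(cors()); // adds Access-Control-Allow-Origin and answers preflights

app.get('/', (req, res) => res.json({ ok: true }));

module.exports.handler = serverless(app);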

Apify: Preserve headers in RequestQueue

I'm trying to crawl our local Confluence installation with the PuppeteerCrawler. My strategy is to log in first, then extract the session cookies and use them in the headers of the start URL. The code is as follows:
First, I log in "by foot" to extract the relevant credentials:
const Apify = require("apify");
const browser = await Apify.launchPuppeteer({sloMo: 500});
const page = await browser.newPage();
await page.goto('https://mycompany/confluence/login.action');
await page.focus('input#os_username');
await page.keyboard.type('myusername');
await page.focus('input#os_password');
await page.keyboard.type('mypasswd');
await page.keyboard.press('Enter');
await page.waitForNavigation();
// Get cookies and close the login session
const cookies = await page.cookies();
browser.close();
const cookie_jsession = cookies.filter( cookie => {
return cookie.name === "JSESSIONID"
})[0];
const cookie_crowdtoken = cookies.filter( cookie => {
return cookie.name === "crowd.token_key"
})[0];
Then I'm building up the crawler structure with the prepared request header:
const startURL = {
  url: 'https://mycompany/confluence/index.action',
  method: 'GET',
  headers: {
    Accept: 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8',
    'Accept-Encoding': 'gzip, deflate, br',
    'Accept-Language': 'de-DE,de;q=0.9,en-US;q=0.8,en;q=0.7',
    Cookie: `${cookie_jsession.name}=${cookie_jsession.value}; ${cookie_crowdtoken.name}=${cookie_crowdtoken.value}`,
  }
};

const requestQueue = await Apify.openRequestQueue();
await requestQueue.addRequest(new Apify.Request(startURL));
const pseudoUrls = [new Apify.PseudoUrl('https://mycompany/confluence/[.*]')];

const crawler = new Apify.PuppeteerCrawler({
  launchPuppeteerOptions: { headless: false, slowMo: 500 },
  requestQueue,
  handlePageFunction: async ({ request, page }) => {
    const title = await page.title();
    console.log(`Title of ${request.url}: ${title}`);
    console.log(await page.content()); // page.content() returns a promise
    await Apify.utils.enqueueLinks({
      page,
      selector: 'a:not(.like-button)',
      pseudoUrls,
      requestQueue
    });
  },
  maxRequestsPerCrawl: 3,
  maxConcurrency: 10,
});

await crawler.run();
The by-foot login and cookie extraction seem to be fine (the "curlified" request works perfectly), but Confluence doesn't accept the login via Puppeteer / headless Chromium. It seems like the headers are getting lost somehow.
What am I doing wrong?
Without first going into the details of why the headers don't work, I would suggest defining a custom gotoFunction in the PuppeteerCrawler options, such as:
{
  // ...
  gotoFunction: async ({ request, page }) => {
    await page.setCookie(...cookies); // From page.cookies() earlier.
    return page.goto(request.url, { timeout: 60000 });
  }
}
This way, you don't need to do the parsing and the cookies will automatically be injected into the browser before each page load.
As a note, modifying default request headers when using a headless browser is not a good practice, because it may lead to blocking on some sites that match received headers against a list of known browser fingerprints.
Update:
The below section is no longer relevant, because you can now use the Request class to override headers as expected.
The headers problem is a complex one involving request interception in Puppeteer. Here's the related GitHub issue in Apify SDK. Unfortunately, the method of overriding headers via a Request object currently does not work in PuppeteerCrawler, so that's why you were unsuccessful.

What config do I need to change to allow a POST to an AWS Lambda function?

I have two endpoints in my API Gateway that are setup identically, and point to the same Lambda function. The only difference is that one of the endpoints is a GET and the other is a POST.
Here is my code that is calling the API endpoint.
fetch('api/public/libraries/sign-out', {
  method: 'POST',
  headers: new Headers({
    'Accept': 'application/json',
  })
})
  .then(response => {
    if (!response.ok) {
      throw new Error('Failed with HTTP code ' + response.status);
    }
    return response.json();
  })
  .catch(error => {
    console.log('Error:', error);
  })
  .then(response => {
    console.log(response);
  });
When I set the method to GET, the request succeeds.
When I have the method as POST, I get index.tsx:59 Error: Error: Failed with HTTP code 403.
I have set Allowed HTTP Methods to GET, HEAD, OPTIONS, PUT, POST, PATCH, DELETE for the path pattern under which this API lies.
What more could be wrong? Is there some additional config I need to change to allow a POST? Does the Lambda function code need to be different when it responds to a POST? Or is it something else entirely?
I'm using Node 6 as the runtime environment on the Lambda. Here is the Lambda code:
exports.handler = (event, context, callback) => {
  callback(null, {
    statusCode: 200,
    headers: {},
    body: JSON.stringify({
      message: 'hello world'
    })
  });
};
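For what it's worth, with Lambda proxy integration the handler above does not need to change to accept a POST; the method arrives in event.httpMethod, so a 403 usually points at the gateway or CDN layer rather than the function. A hedged sketch of branching on the method, if you want to:

// Hedged sketch: one proxy-integration handler serving both GET and POST.
exports.handler = (event, context, callback) => {
  const method = event.httpMethod; // populated by API Gateway proxy integration
  callback(null, {
    statusCode: 200,
    headers: {},
    body: JSON.stringify({ message: 'hello from ' + method })
  });
};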

Express lambda return JSON from api call

I have a serverless Express app. The app has an app.get route at '/', which should call an API, retrieve the data from it, and send it back to the user.
https://y31q4zn654.execute-api.eu-west-1.amazonaws.com/dev
When I visit that page, I can see the data returned as JSON.
This is my index.js of the lambda function:
const serverless = require('serverless-http');
const express = require('express');
const request = require('request');

const app = express();

app.get('/', function (req, res) {
  var options = {
    method: 'POST',
    url: 'https://some.api.domain/getTopNstc',
    headers: { 'Content-Type': 'application/json' },
    body: {},
    json: true
  };
  request(options, function (error, response, body) {
    console.log('request call');
    if (error) throw new Error(error);
    // res.status(200).send(response);
    res.json(response);
  });
});

module.exports.handler = serverless(app);
However, I would like to be able to call the lambda's '/' route via axios (or another promise-based request library).
I've tried the following code to make a call to my lambda:
axios.get('https://y31q4zn654.execute-api.eu-west-1.amazonaws.com/dev', {
  headers: {
    'Content-Type': 'application/json',
  },
  body: {}
}).then((res) => {
  console.log(res);
});
Failed to load https://y31q4zn654.execute-api.eu-west-1.amazonaws.com/dev: No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin 'myDomain' is therefore not allowed access.
bundle.js:31 Cross-Origin Read Blocking (CORB) blocked cross-origin response https://y31q4zn654.execute-api.eu-west-1.amazonaws.com/dev with MIME type application/json. See https://www.chromestatus.com/feature/5629709824032768 for more details.
API Gateway config: (screenshot omitted)
I concur with @KMo. I'm pretty sure this is a CORS issue. There is a module on npm exactly for this purpose; read up about it here.
To install it, run npm install -s cors
Then in your Express app, add the following:
const express = require('express');
const app = express();
const cors = require('cors');
app.use(cors());
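If you want to allow only your own front end rather than every origin, the cors package accepts an options object; a hedged sketch ('https://mydomain.example' is a placeholder):

// Hedged sketch: restrict CORS to a single origin instead of the default '*'.
app.use(cors({
  origin: 'https://mydomain.example' // placeholder for your real front-end origin
}));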