How would I update the authorization header from a cookie on a GraphQL Apollo mutation or query?

I have the following _app.js for my NextJS app.
I want to change the authorization header on login via a cookie that will be set. I think I can handle the cookie and login functionality, but I am stuck on how to get the cookie into the ApolloClient headers authorization. Is there a way to pass the headers, with a token from the cookie, into a mutation? Any thoughts here?
I have the cookie working, so I have a logged-in token, but I need to change the ApolloClient token to the new one via the cookie, in the _app.js. Not sure how this is done.
import "../styles/globals.css";
import { ApolloClient, ApolloProvider, InMemoryCache } from "#apollo/client";
const client = new ApolloClient({
uri: "https://graphql.fauna.com/graphql",
cache: new InMemoryCache(),
headers: {
authorization: `Bearer ${process.env.NEXT_PUBLIC_FAUNA_SECRET}`,
},
});
console.log(client.link.options.headers);
function MyApp({ Component, pageProps }) {
return (
<ApolloProvider client={client}>
<Component {...pageProps} />
</ApolloProvider>
);
}
export default MyApp;
UPDATE: I've read something in the Apollo docs about setting this up to pass the cookie, but I don't quite understand it.
const link = createHttpLink({
  uri: '/graphql',
  credentials: 'same-origin'
});

const client = new ApolloClient({
  cache: new InMemoryCache(),
  link,
});
UPDATE: So I have made good progress with the above; it lets me pass the headers via the context option of useQuery, like below. Now the only problem is that the cookieData seems to load after the useQuery runs, or something, because if I pass in an API key directly it works, but the fetched cookie gives me "invalid db secret", and it's the same key.
const { data: cookieData, error: cookieError } = useSWR(
  "/api/cookie",
  fetcher
);

console.log(cookieData);

// const { loading, error, data } = useQuery(FORMS);
const { loading, error, data } = useQuery(FORMS, {
  context: {
    headers: {
      authorization: "Bearer " + cookieData,
    },
  },
});
Any ideas on this problem would be great.

If you need to run some GraphQL queries after some other data is loaded, then I recommend putting the latter queries in a separate React component with the secret as a prop and only loading it once the former data is available. Or you can use lazy queries.
Separate component:
const Form = ({ cookieData }) => {
  useQuery(FORMS, {
    context: {
      headers: {
        authorization: "Bearer " + cookieData,
      },
    },
  });
  return /* ... whatever ... */
}

const FormWrapper = () => {
  const { data: cookieData, error: cookieError } = useSWR(
    "/api/cookie",
    fetcher
  );
  return cookieData ? <Form cookieData={cookieData} /> : null /* ...loading */;
}
I might be missing some nuances with when/how React will mount and unmount the inner component, so I suppose you should be careful with that.
Manual Execution with useLazyQuery
https://www.apollographql.com/docs/react/data/queries/#manual-execution-with-uselazyquery
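For the lazy-query route, here is a minimal sketch (it assumes the same FORMS query, useSWR and fetcher as in the question; useLazyQuery's execute function accepts per-call options such as context):

import { useEffect } from "react";
import useSWR from "swr";
import { useLazyQuery } from "@apollo/client";

const Forms = () => {
  // FORMS and fetcher are assumed to be the same as in the question above.
  const { data: cookieData } = useSWR("/api/cookie", fetcher);
  const [loadForms, { loading, error, data }] = useLazyQuery(FORMS);

  useEffect(() => {
    // Fire the query only once the cookie/token has actually arrived.
    if (cookieData) {
      loadForms({
        context: {
          headers: {
            authorization: "Bearer " + cookieData,
          },
        },
      });
    }
  }, [cookieData, loadForms]);

  return null; // render loading / error / data as needed
};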

Related

How to block external requests to a NextJS Api route

I am using NextJS API routes to basically just proxy a custom API built with Python and Django that has not yet been made completely public. I used the tutorial on Vercel to add CORS as a middleware to the route; however, it hasn't provided the exact functionality I wanted.
I do not want to allow just anyone to make a request to the route; that sort of defeats the purpose, but it still at least hides my API key.
Question
Is there a better way of properly stopping requests made to the route from external sources?
Any answer is appreciated!
// Api Route
import axios from "axios";
import Cors from 'cors'

// Initializing the cors middleware
const cors = Cors({
  methods: ['GET', 'HEAD'],
  allowedHeaders: ['Content-Type', 'Authorization', 'Origin'],
  origin: ["https://squadkitresearch.net", 'http://localhost:3000'],
  optionsSuccessStatus: 200,
})

// Helper to run an Express-style middleware in a Next.js API route
function runMiddleware(req, res, fn) {
  return new Promise((resolve, reject) => {
    fn(req, res, (result) => {
      if (result instanceof Error) {
        return reject(result)
      }
      return resolve(result)
    })
  })
}
async function getApi(req, res) {
  try {
    await runMiddleware(req, res, cors)
    const {
      query: { url },
    } = req;
    const URL = `https://xxx/api/${url}`;
    const response = await axios.get(URL, {
      headers: {
        Authorization: `Api-Key xxxx`,
        Accept: "application/json",
      }
    });
    if (response.status === 200) {
      res.status(200).send(response.data)
    }
    console.log('Server Side response.data -->', response.data)
  } catch (error) {
    console.log('Error -->', error)
    res.status(500).send({ error: 'Server Error' });
  }
}

export default getApi
Sorry for this late answer.
I just think that this is the default behaviour of NextJS: you are already set, don't worry. There is, in contrast, a little bit of customization to do if you want to allow external sources to fetch from your Next API.
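For that customization, one option is custom headers in next.config.js — a minimal sketch, assuming Next.js 9.5+ (the origin value is taken from the question's CORS config):

// next.config.js
module.exports = {
  async headers() {
    return [
      {
        // Apply to every API route
        source: "/api/:path*",
        headers: [
          { key: "Access-Control-Allow-Origin", value: "https://squadkitresearch.net" },
          { key: "Access-Control-Allow-Methods", value: "GET,HEAD" },
        ],
      },
    ];
  },
};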

How to initialize ApolloClient in SvelteKit to work on both SSR and client side

I tried, but it didn't work. I got an error: Error when evaluating SSR module /node_modules/cross-fetch/dist/browser-ponyfill.js:
<script lang="ts">
import fetch from 'cross-fetch';
import { ApolloClient, InMemoryCache, HttpLink } from "#apollo/client";
const client = new ApolloClient({
ssrMode: true,
link: new HttpLink({ uri: '/graphql', fetch }),
uri: 'http://localhost:4000/graphql',
cache: new InMemoryCache()
});
</script>
With SvelteKit the subject of CSR vs. SSR and where data fetching should happen is a bit deeper than with other somewhat "similar" solutions. The guide below should help you connect some of the dots, but a couple of things need to be stated first.
To define a server-side route, create a file with the .js extension anywhere in the src/routes directory tree. This .js file can have all the import statements required without the JS bundles that they reference being sent to the web browser.
The @apollo/client package is quite huge, as it contains the react dependency. Instead, you might want to consider importing just @apollo/client/core, even if you're setting up the Apollo Client to be used only on the server side, as the demo below shows. @apollo/client is not an ESM package; notice how it's imported below in order for the project to build successfully with the node adapter.
Try going through the following steps.
Create a new SvelteKit app and choose the 'SvelteKit demo app' in the first step of the SvelteKit setup wizard. Answer the "Use TypeScript?" question with N as well as all of the questions afterwards.
npm init svelte@next demo-app
cd demo-app
Modify the package.json accordingly. Optionally, check for updates to all packages with npx npm-check-updates -u.
{
  "name": "demo-app",
  "version": "0.0.1",
  "scripts": {
    "dev": "svelte-kit dev",
    "build": "svelte-kit build --verbose",
    "preview": "svelte-kit preview"
  },
  "devDependencies": {
    "@apollo/client": "^3.3.15",
    "@sveltejs/adapter-node": "next",
    "@sveltejs/kit": "next",
    "graphql": "^15.5.0",
    "node-fetch": "^2.6.1",
    "svelte": "^3.37.0"
  },
  "type": "module",
  "dependencies": {
    "@fontsource/fira-mono": "^4.2.2",
    "@lukeed/uuid": "^2.0.0",
    "cookie": "^0.4.1"
  }
}
Modify the svelte.config.js accordingly.
import node from '@sveltejs/adapter-node';

export default {
  kit: {
    // By default, `npm run build` will create a standard Node app.
    // You can create optimized builds for different platforms by
    // specifying a different adapter
    adapter: node(),

    // hydrate the <div id="svelte"> element in src/app.html
    target: '#svelte'
  }
};
Create the src/lib/Client.js file with the following contents. This is the Apollo Client setup file.
import fetch from 'node-fetch';
import { ApolloClient, HttpLink } from '@apollo/client/core/core.cjs.js';
import { InMemoryCache } from '@apollo/client/cache/cache.cjs.js';

class Client {
  constructor() {
    if (Client._instance) {
      return Client._instance
    }
    Client._instance = this;
    this.client = this.setupClient();
  }

  setupClient() {
    const link = new HttpLink({
      uri: 'http://localhost:4000/graphql',
      fetch
    });

    const client = new ApolloClient({
      link,
      cache: new InMemoryCache()
    });
    return client;
  }
}

export const client = (new Client()).client;
Create the src/routes/qry/test.js file with the following contents. This is the server-side route. In case the GraphQL schema doesn't have the double function, specify a different query, input(s) and output.
import { client } from '$lib/Client.js';
import { gql } from '@apollo/client/core/core.cjs.js';

export const post = async request => {
  const { num } = request.body;
  try {
    const query = gql`
      query Doubled($x: Int) {
        double(number: $x)
      }
    `;
    const result = await client.query({
      query,
      variables: { x: num }
    });
    return {
      status: 200,
      body: {
        nodes: result.data.double
      }
    }
  } catch (err) {
    return {
      status: 500,
      error: 'Error retrieving data'
    }
  }
}
Add the following to the load function of the routes/todos/index.svelte file, within the <script context="module">...</script> tag.
try {
  const res = await fetch('/qry/test', {
    method: 'POST',
    credentials: 'same-origin',
    headers: {
      'Content-Type': 'application/json'
    },
    body: JSON.stringify({
      num: 19
    })
  });

  const data = await res.json();
  console.log(data);
} catch (err) {
  console.error(err);
}
Finally, execute the npm install and npm run dev commands. Load the site in your web browser and see the server-side route being queried from the client whenever you hover over the TODOS link in the navbar. In the console's network tab, notice how much quicker the response from the test route is on every second and subsequent request, thanks to the Apollo Client instance being a singleton.
Two things to keep in mind when using phaleth's solution above: caching and authenticated requests.
Since the client is used in the endpoint /qry/test.js, the singleton pattern with its caching behavior makes your server stateful. So if user A and then user B make the same query, B could end up seeing some of A's data.
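If you keep the singleton, a partial mitigation is to bypass the shared cache per query — a sketch, assuming Apollo Client 3, applied to the client.query call in the endpoint:

const result = await client.query({
  query,
  variables: { x: num },
  // 'no-cache' neither reads from nor writes to the shared InMemoryCache,
  // so one user's response is never served to another.
  fetchPolicy: 'no-cache'
});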
The same problem arises if you need authorization headers in your query: you would need to set this up in the setupClient method, like so
setupClient(sometoken) {
  // ... (setContext is imported from @apollo/client/link/context)
  const authLink = setContext((_, { headers }) => {
    return {
      headers: {
        ...headers,
        authorization: `Bearer ${sometoken}`
      }
    };
  });

  const client = new ApolloClient({
    credentials: 'include',
    link: authLink.concat(link),
    cache: new InMemoryCache()
  });
  return client;
}
But then, with the singleton pattern, this becomes problematic if you have multiple users.
To keep your server stateless, a workaround is to avoid the singleton pattern and create a new Client(sometoken) in the endpoint.
This is not an optimal solution: it recreates the client on each request and basically just erases the cache. But it solves the caching and authorization concerns when you have multiple users.
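A sketch of that per-request approach (createClient and tokenFromRequest are illustrative names; the deep .cjs.js import paths assume the same @apollo/client version as in the demo above):

import fetch from 'node-fetch';
import { ApolloClient, HttpLink } from '@apollo/client/core/core.cjs.js';
import { InMemoryCache } from '@apollo/client/cache/cache.cjs.js';
import { setContext } from '@apollo/client/link/context/context.cjs.js';

// A factory instead of a singleton: a fresh client (and fresh cache) per
// request, so neither cached data nor tokens can leak between users.
export const createClient = (sometoken) => {
  const link = new HttpLink({ uri: 'http://localhost:4000/graphql', fetch });
  const authLink = setContext((_, { headers }) => ({
    headers: { ...headers, authorization: `Bearer ${sometoken}` }
  }));
  return new ApolloClient({
    link: authLink.concat(link),
    cache: new InMemoryCache()
  });
};

// In the endpoint:
// const client = createClient(tokenFromRequest);
// const result = await client.query({ query, variables: { x: num } });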

Unable to set authorization header using Apollo's setContext()

So I'm trying to pass an authorization header to Apollo according to the docs on their site.
https://www.apollographql.com/docs/react/networking/authentication/
However, when I try to log request.headers on my server, I don't see the authorization property there. Next, I checked whether the setContext() function was being run at all by putting a console.log() inside it, and found that it didn't run.
const httpLink = new HttpLink({ uri: 'http://localhost:4000' });

const authLink = setContext((request, { headers }) => {
  const token = localStorage.getItem('library-user-token')
  console.log(token) // doesn't log at all
  return ({
    headers: {
      ...headers,
      authorization: token ? `Bearer ${token}` : null,
    }
  })
});

const client = new ApolloClient({
  link: authLink.concat(httpLink),
  cache: new InMemoryCache()
});
I have almost identical code in my project and it works. Are you passing the client to your ApolloProvider?
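For reference, a minimal sketch of that wiring (App and the root element are placeholders; client is the instance built with authLink.concat(httpLink) above). setContext only runs when queries go through the client that owns that link chain:

import React from 'react';
import ReactDOM from 'react-dom';
import { ApolloProvider } from '@apollo/client';

// `client` is the ApolloClient instance from the snippet above.
ReactDOM.render(
  <ApolloProvider client={client}>
    <App />
  </ApolloProvider>,
  document.getElementById('root')
);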

How can I set a cookie with Relay?

I've got Express.js server code:
...
const server = new GraphQLServer({
  typeDefs: `schema.graphql`,
  resolvers,
  context: context => {
    let cookie = get(context, 'request.headers.cookie');
    return { ...context, cookie, pubsub };
  },
});
such that I can attach the cookie to resolvers' requests:
...
method: 'GET',
headers: {
  cookie: context.cookie,
},
Now I want to be able to use Relay (as a GraphQL client) and I want to be able to attach a cookie to Relay's requests as well.
I've found a similar question, but it's not clear to me where I can insert that code:
Relay.injectNetworkLayer(
  new Relay.DefaultNetworkLayer('/graphql', {
    credentials: 'same-origin',
  })
);
since I don't import Relay in Environment.js.
Update: I tried to add
import { Relay, graphql, QueryRenderer } from 'react-relay';

Relay.injectNetworkLayer(
  new Relay.DefaultNetworkLayer('http://example.com/graphql', {
    credentials: 'same-origin',
  })
);
to a file where I send GraphQL queries (e.g., client.js), but it says that Relay is undefined.
Update #2: this repo looks interesting.
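For what it's worth, injectNetworkLayer belongs to old Relay Classic and doesn't exist in modern react-relay, which would explain the undefined import. In Relay Modern, the usual place for credentials is the fetch function passed to Network.create in Environment.js — a minimal sketch:

// Environment.js
import { Environment, Network, RecordSource, Store } from 'relay-runtime';

function fetchQuery(operation, variables) {
  return fetch('http://example.com/graphql', {
    method: 'POST',
    // 'include' sends cookies on cross-origin requests;
    // use 'same-origin' if the GraphQL endpoint shares the host.
    credentials: 'include',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ query: operation.text, variables }),
  }).then(response => response.json());
}

export default new Environment({
  network: Network.create(fetchQuery),
  store: new Store(new RecordSource()),
});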

Apify: Preserve headers in RequestQueue

I'm trying to crawl our local Confluence installation with the PuppeteerCrawler. My strategy is to log in first, then extract the session cookies and use them in the header of the start URL. The code is as follows:
First, I log in 'by foot' to extract the relevant credentials:
const Apify = require("apify");

const browser = await Apify.launchPuppeteer({ slowMo: 500 });
const page = await browser.newPage();
await page.goto('https://mycompany/confluence/login.action');
await page.focus('input#os_username');
await page.keyboard.type('myusername');
await page.focus('input#os_password');
await page.keyboard.type('mypasswd');
await page.keyboard.press('Enter');
await page.waitForNavigation();

// Get cookies and close the login session
const cookies = await page.cookies();
await browser.close();

const cookie_jsession = cookies.filter(cookie => {
  return cookie.name === "JSESSIONID"
})[0];

const cookie_crowdtoken = cookies.filter(cookie => {
  return cookie.name === "crowd.token_key"
})[0];
Then I'm building up the crawler structure with the prepared request header:
const startURL = {
  url: 'https://mycompany/confluence/index.action',
  method: 'GET',
  headers: {
    Accept: 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8',
    'Accept-Encoding': 'gzip, deflate, br',
    'Accept-Language': 'de-DE,de;q=0.9,en-US;q=0.8,en;q=0.7',
    Cookie: `${cookie_jsession.name}=${cookie_jsession.value}; ${cookie_crowdtoken.name}=${cookie_crowdtoken.value}`,
  }
}

const requestQueue = await Apify.openRequestQueue();
await requestQueue.addRequest(new Apify.Request(startURL));
const pseudoUrls = [ new Apify.PseudoUrl('https://mycompany/confluence/[.*]') ];

const crawler = new Apify.PuppeteerCrawler({
  launchPuppeteerOptions: { headless: false, slowMo: 500 },
  requestQueue,
  handlePageFunction: async ({ request, page }) => {
    const title = await page.title();
    console.log(`Title of ${request.url}: ${title}`);
    console.log(await page.content());
    await Apify.utils.enqueueLinks({
      page,
      selector: 'a:not(.like-button)',
      pseudoUrls,
      requestQueue
    });
  },
  maxRequestsPerCrawl: 3,
  maxConcurrency: 10,
});

await crawler.run();
The by-foot login and cookie extraction seem to be OK (the "curlified" request works perfectly), but Confluence doesn't accept the login via Puppeteer / headless Chromium. It seems like the headers are getting lost somehow...
What am I doing wrong?
Without first going into the details of why the headers don't work, I would suggest defining a custom gotoFunction in the PuppeteerCrawler options, such as:
{
  // ...
  gotoFunction: async ({ request, page }) => {
    await page.setCookie(...cookies); // From page.cookies() earlier.
    return page.goto(request.url, { timeout: 60000 })
  }
}
This way, you don't need to do the parsing and the cookies will automatically be injected into the browser before each page load.
As a note, modifying default request headers when using a headless browser is not a good practice, because it may lead to blocking on some sites that match received headers against a list of known browser fingerprints.
Update:
The section below is no longer relevant, because you can now use the Request class to override headers as expected.
The headers problem is a complex one involving request interception in Puppeteer. Here's the related GitHub issue in the Apify SDK. Unfortunately, the method of overriding headers via a Request object currently does not work in PuppeteerCrawler, which is why you were unsuccessful.