apollo-server-plugin-response-cache: How to invalidate the cache? - apollo

I'm using the apollo-server-plugin-response-cache package, set up during server creation like this:
const responseCachePlugin = require('apollo-server-plugin-response-cache');

const server = new ApolloServer({
  // ...
  plugins: [responseCachePlugin()],
});
However, I can't find anything in the documentation about how to delete or update a cached value when an event occurs, such as a mutation. How can this be achieved inside the GraphQL resolver responsible for that event?

AWS Amplify environment variable coming through as undefined when called

I'm launching my Next.js application through AWS Amplify, and I have an environment variable that returns undefined when called in my Node application.
I defined the environment variable from the Amplify console, shown here:
(screenshot: Amplify Environment Variable)
The variable STRAPI is a 256-character string, which according to the documentation I found should be fine.
Here is the code that uses the environment variables:
import qs from "qs";

/**
 * Get full Strapi URL from path
 * @param {string} path Path of the URL
 * @returns {string} Full Strapi URL
 */
export function getStrapiURL(path = "") {
  return `${
    process.env.NEXT_PUBLIC_STRAPI_API_URL || "http://localhost:1337"
  }${path}`;
}
/**
 * Helper to make GET requests to Strapi API endpoints
 * @param {string} path Path of the API route
 * @param {Object} urlParamsObject URL params object, will be stringified
 * @param {Object} options Options passed to fetch
 * @returns Parsed API call response
 */
export async function fetchAPI(path, urlParamsObject = {}, options = {}) {
  // Merge default and user options
  const token = process.env.STRAPI;
  console.log("Token in process.env:");
  console.log(process.env.STRAPI);
  console.log("Token in token var:");
  console.log(token);
  const mergedOptions = {
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${token}`,
    },
    ...options,
  };
  // Build request URL
  const queryString = qs.stringify(urlParamsObject);
  const requestUrl = `${getStrapiURL(
    `/api${path}${queryString ? `?${queryString}` : ""}`
  )}`;
  console.log("Request url:");
  console.log(requestUrl);
  // Trigger API call
  const response = await fetch(requestUrl, mergedOptions);
  // Handle response
  if (!response.ok) {
    console.error(response.statusText);
    throw new Error(`An error occurred, please try again`);
  }
  const data = await response.json();
  return data;
}
This code was lifted from another project; credit to its original author.
NEXT_PUBLIC_STRAPI_API_URL comes out fine; it's just the STRAPI environment variable that comes out undefined.
Side note: I'm fully aware you shouldn't log tokens and I 100% plan on regenerating the token once the issue is resolved.
Here are the logs from CloudWatch, and as you can see, NEXT_PUBLIC_STRAPI_API_URL comes out fine:
(screenshot: cloudfront logs)
I'm not sure what the issue is here, but I've tried the following:
- Giving the STRAPI var a new value; it still came out as undefined after redeploying.
- Deleting the env var and creating a new one, to see if there was some bug there; that did not resolve it after redeploying.
- Verifying the setup works locally on my machine.
Not sure what next steps to take here.
The problem is that Next.js needs the environment variable to be prefixed with NEXT_PUBLIC_ for it to be exposed to the application. This was introduced around Next.js 9; it's been a while since I used Next.js.
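If renaming the variable is undesirable, a plain-named build-time variable can also be exposed explicitly through next.config.js; a minimal sketch, assuming STRAPI is set in the Amplify build environment (keep in mind this inlines the value into the bundle, so it is not appropriate for secrets that must stay off the client):

```javascript
// next.config.js -- sketch: expose a variable without the NEXT_PUBLIC_ prefix.
// Values under `env` are inlined at build time, so they must be present
// when `next build` runs (e.g. set in the Amplify console).
module.exports = {
  env: {
    STRAPI: process.env.STRAPI,
  },
};
```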

Is there a way to run only part of the requests in Postman?

We have a lot of API level automated tests written as collections of requests in Postman.
We have a script to run all collections in automated manner.
Is there a way to label/run only a subset of requests, e.g. tagged as a smoke suite, without copying requests into new collection(s) and running those explicitly (as this yields the need to maintain the same tests in two places)?
There might be labels, groups, or some script that skips the request if an env variable is set...
You can create folders and organize tests like:
smoke_and_regression
smoke_only
etc.
You can specify which folder to run using the --folder argument when using newman as a command-line tool.
You can also control the execution flow using postman.setNextRequest.
You can also run newman as an npm module. You just need to write logic to read the collection file, get all folder names containing e.g. "smoke", and pass them as an array:
const newman = require('newman'); // require newman in your project

// call newman.run to pass `options` object and wait for callback
newman.run({
  collection: require('./sample-collection.json'),
  reporters: 'cli',
  folder: folders
}, function (err) {
  if (err) { throw err; }
  console.log('collection run complete!');
});
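The folder-filtering logic mentioned above could be sketched like this (it assumes the standard Postman collection v2 JSON layout, where a folder is a top-level item carrying a nested `item` array; the folder names are examples):

```javascript
// Sketch: collect folder names matching a tag from an exported Postman
// collection object (collection v2 layout assumed).
function foldersWithTag(collection, tag) {
  return collection.item
    .filter((entry) => Array.isArray(entry.item)) // folders nest their requests in `item`
    .map((entry) => entry.name)
    .filter((name) => name.toLowerCase().includes(tag.toLowerCase()));
}
```

The resulting array is what gets passed as the `folder` option to newman.run.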
Just an update from the comments: in both the old and new UI, you can select which folder to execute in the collection runner.
Get all requests in the collection:
You can also get information about all the requests in a collection by using:
https://api.getpostman.com/collections/{{collection_UUID}}
To get the UUID and an API key, go to:
https://app.getpostman.com
To generate an API key, go to account settings > API keys and generate one.
To get the collection UUID, go to the specific workspace and collection and copy the UUID part from the URL.
Now, in your collection, rename all requests as:
get user details [Regression][Smoke][Somethingelse]
get account details [Regression]
Then create a new request called "initial request" and keep it as the first request in your collection:
url: https://api.getpostman.com/collections/8xxxxtheuuidyoucopied
authorization: apikey-header : key: X-Api-Key and value: yourapikey
test-script:
pm.environment.unset("requestToRun")
let requestList = pm.response.json().collection.item.map((a) => a.name)
let requestToRun = requestList.filter((a) => a.includes(pm.environment.get("tag")))
let val = requestToRun.pop()
pm.environment.set("requestToRun", requestToRun)
val ? postman.setNextRequest(val) : postman.setNextRequest(null)
Now set the environment variable to the tag you want to look for, e.g. to run requests whose names contain "Regression", set pm.environment.set("tag", "Regression").
Now in your collection's pre-request script add:
if (pm.info.requestName !== "initial request") {
  let requestToRun = pm.environment.get("requestToRun")
  let val = requestToRun.pop()
  pm.environment.set("requestToRun", requestToRun)
  val ? postman.setNextRequest(val) : postman.setNextRequest(null)
}
Output:
Example collection:
https://www.getpostman.com/collections/73e771fe61f7781f8598
Ran only requests that have "Copy" in their name

How do I call another micro-service from my micro-service?

This might sound a little odd, but I'm facing a situation where I have a micro-service that assembles some pricing logic, but for that, it needs a bunch of information that another micro-service provides.
I believe I have two options: (1) grab all the data I need from the database and ignore the GraphQL work that was done in this other micro-service or (2) somehow hit this other micro-service from within my current service and get the data I need.
How would someone accomplish (2)?
I have no clear path of how to get that done without creating a mess.
I imagine that turning my pricing micro-service into a small client could work, but I'm just not sure if that's bad practice.
After much consideration and reading the answers I got here, I decided to turn my micro-service into a mini-client by using apollo-client.
In short, I have something like this:
import { ApolloClient } from 'apollo-client';
import { InMemoryCache } from 'apollo-cache-inmemory';
import { HttpLink } from 'apollo-link-http';

// Instantiate required constructor fields
const cache = new InMemoryCache();
const link = new HttpLink({
  uri: 'http://localhost:3000/graphql',
});

const client = new ApolloClient({
  // Provide required constructor fields
  cache: cache,
  link: link,
});

export default client;
That HttpLink points to the federated schema, so I can call it from my resolver or anywhere else like this:
const query = gql`
  query {
    something(uid: "${uid}") {
      field1
      field2
      field3
      anotherThing {
        field1
        field2
      }
    }
  }
`;
const response = await dataSources.client.query({ query });
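Note that `dataSources.client` assumes the client was wired into ApolloServer's dataSources option; a minimal sketch of that wiring (apollo-server 2.x assumed; the factory name is hypothetical):

```javascript
// Sketch: a dataSources factory that hands the ApolloClient instance
// to every resolver as `dataSources.client`. Passed to the server as
// new ApolloServer({ typeDefs, resolvers, dataSources: makeDataSources(client) }).
const makeDataSources = (client) => () => ({ client });
```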

How to get subscription data from client cache?

I'm new to all the hot GraphQL/Apollo stuff.
I have a subscription which gets a search result:
export const SEARCH_RESULTS_SUBSCRIPTION = gql`
  subscription onSearchResultsRetrieved($sid: String!) {
    searchResultsRetrieved(sid: $sid) {
      status
      clusteredOffers {
        id
      }
    }
  }
`;
Is it possible to query the "status" field from the client cache if I need it inside another component? Or do I have to use an additional ?
In the Apollo dev tools I can see that there is a cache entry under "ROOT_SUBSCRIPTION", not "ROOT_QUERY". What does that mean?
....thanks
I found out that subscribeToMore is my friend for solving this.
First I wrote a normal query for the data I want to subscribe to, so that I have cached data; the cache is then updated by the subscription.
<3 apollo
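The subscribeToMore approach boils down to an updateQuery merge function that writes each pushed subscription result into the query's cache entry. A minimal sketch of that function (the query field name `searchResults` is a hypothetical example; the subscription field matches the one above):

```javascript
// Sketch: the updateQuery callback passed to subscribeToMore.
// `prev` is the cached query result; a pushed payload replaces it.
function mergeSearchResults(prev, { subscriptionData }) {
  if (!subscriptionData.data) return prev; // keep the cache on empty pushes
  return {
    ...prev,
    searchResults: subscriptionData.data.searchResultsRetrieved,
  };
}
```

With the query in the cache, any other component can read "status" via the same query instead of a second subscription.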

aws-sdk-js transact operations not exposed to DocumentClient?

I'm developing some business administration software in JS and I find myself in need of ACID transactions with DynamoDB. Lucky me, AWS just released the transactGet and transactWrite APIs, which cover just the right use case!
I'm using the AWS.DynamoDB.DocumentClient object to make my other calls, and it seems like the transact operations are not exposed for me to use.
I hacked around inside the aws-sdk code and found the interface exports, like such ("aws-sdk": "^2.368.0", document_client.d.ts, line 2076):
export interface TransactWriteItem {
  /**
   * A request to perform a check item operation.
   */
  ConditionCheck?: ConditionCheck;
  /**
   * A request to perform a PutItem operation.
   */
  Put?: Put;
  /**
   * A request to perform a DeleteItem operation.
   */
  Delete?: Delete;
  /**
   * A request to perform an UpdateItem operation.
   */
  Update?: Update;
}
However, whenever I try to call the client with these methods, I get back a TypeError, namely:
TypeError: dynamoDb[action] is not a function
Locally, I can just hack the SDK and expose everything, but that is not accepted in my deployment environment.
How should I proceed?
Thank you very much!
Edit:
If it is worth anything, here is the code I'm using to make the calls:
dynamo-lib.js:
import AWS from "aws-sdk";

export function call(action, params) {
  const dynamoDb = new AWS.DynamoDB.DocumentClient();
  return dynamoDb[action](params).promise();
}
Lambda code:
import * as dynamoDbLib from '../libs/dynamodb-lib';
import { success, failure } from '../libs/response-lib';

export default async function main(params, callback) {
  try {
    const result = await dynamoDbLib.call("transactWrite", params);
    callback(null, success(result));
  } catch (e) {
    console.log(e);
    callback(null, failure({ "status": "Internal server error" }));
  }
}
Edit 2:
Seems like the document client really does not export the method:
AWS.DynamoDB.DocumentClient = AWS.util.inherit({
  /**
   * @api private
   */
  operations: {
    batchGetItem: 'batchGet',
    batchWriteItem: 'batchWrite',
    putItem: 'put',
    getItem: 'get',
    deleteItem: 'delete',
    updateItem: 'update',
    scan: 'scan',
    query: 'query'
  },
  ...
As mentioned in the accepted answer, the workaround is to use the transactWriteItems method directly on the DynamoDB object.
Thanks for the help! =D
Cheers!
UPDATE:
The issue has been resolved
Github Thread
The AWS SDK does not currently support transactions in the DynamoDB document client. I have raised the issue on GitHub; a current workaround is just not to use the document client:
let aws = require("aws-sdk");

aws.config.update({
  region: "us-east-1",
  endpoint: "dynamodb.us-east-1.amazonaws.com"
});

var dynamodb = new aws.DynamoDB();
dynamodb.transactWriteItems();
Also make sure you are using SDK v2.365.0 or later.
Update the aws-sdk package (>2.365), then use
Use
var dynamodb = new aws.DynamoDB();
instead of
var dynamodb = new aws.DynamoDB.DocumentClient();
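To illustrate what the low-level call expects, here is a sketch of building the TransactItems params for transactWriteItems (table names, keys, and attributes are hypothetical; note the values use the AttributeValue wire format, e.g. { S: ... } and { N: ... }, since the low-level client does not marshal plain JS objects the way DocumentClient does):

```javascript
// Sketch: params for dynamodb.transactWriteItems on the low-level client.
// Writes an order and decrements stock atomically; if the condition fails,
// the whole transaction is rolled back.
function buildTransferParams(orderId, sku) {
  return {
    TransactItems: [
      {
        Put: {
          TableName: "Orders", // hypothetical table
          Item: { orderId: { S: orderId }, sku: { S: sku } },
        },
      },
      {
        Update: {
          TableName: "Inventory", // hypothetical table
          Key: { sku: { S: sku } },
          UpdateExpression: "SET stock = stock - :one",
          ConditionExpression: "stock >= :one",
          ExpressionAttributeValues: { ":one": { N: "1" } },
        },
      },
    ],
  };
}
// usage (requires aws-sdk):
// new aws.DynamoDB().transactWriteItems(buildTransferParams("o-1", "sku-9"), callback);
```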